2025-06-08 17:28:08,343 [ 252462 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:42, check_args_and_update_paths)
2025-06-08 17:28:08,344 [ 252462 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:86, check_args_and_update_paths)
2025-06-08 17:28:08,344 [ 252462 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:97, check_args_and_update_paths)
2025-06-08 17:28:08,344 [ 252462 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:99, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_y0o5jx --privileged --dns-search='.' --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=2cffe1eae894 -e DOCKER_BASE_TAG=2993bc2bf171 -e DOCKER_KERBERIZED_HADOOP_TAG=ce74919e88f5 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=a2d3dc777d0c -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e CLICKHOUSE_USE_OLD_ANALYZER=1 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication test_insert_into_distributed/test.py::test_prefer_localhost_replica test_insert_into_distributed/test.py::test_table_function test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local 'test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]' 'test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]' test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement test_insert_over_http_query_log/test.py::test_insert_over_http_ok test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table test_interserver_dns_retires/test.py::test_query test_jbod_ha/test.py::test_jbod_ha test_jdbc_bridge/test.py::test_jdbc_delete test_jdbc_bridge/test.py::test_jdbc_distributed_query test_jdbc_bridge/test.py::test_jdbc_insert test_jdbc_bridge/test.py::test_jdbc_query test_jdbc_bridge/test.py::test_jdbc_table_engine test_jdbc_bridge/test.py::test_jdbc_update test_keeper_and_access_storage/test.py::test_create_replicated test_keeper_availability_zone/test.py::test_get_availability_zone test_keeper_memory_soft_limit/test.py::test_soft_limit_create test_keeper_persistent_log_multinode/test.py::test_restart_multinode test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3 test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader test_keeper_secure_client/test.py::test_connection test_library_bridge/test_exiled.py::test_bridge_dies_with_parent test_log_levels_update/test.py::test_log_levels_update test_merge_tree_load_parts/test.py::test_merge_tree_load_parts test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error 'test_merge_tree_s3/test.py::test_alter_table_columns[node]' 'test_merge_tree_s3/test.py::test_attach_detach_partition[node]' 'test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]' 'test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]' 'test_merge_tree_s3/test.py::test_freeze_unfreeze[node]' 'test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]' 'test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]' 'test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]' 'test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]' 'test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]' 'test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]' 'test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]' 'test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]' 'test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]' 'test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]' 'test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]' 'test_merge_tree_s3/test.py::test_table_manipulations[node]' 'test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]' 'test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]' 'test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]' 'test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]' test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]' test_partition/test.py::test_attach_check_all_parts test_partition/test.py::test_cannot_attach_active_part test_partition/test.py::test_detached_part_dir_exists test_partition/test.py::test_drop_detached_parts test_partition/test.py::test_make_clone_in_detached test_partition/test.py::test_partition_complex test_partition/test.py::test_partition_simple test_partition/test.py::test_system_detached_parts test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed test_placement_info/test.py::test_placement_info_from_config test_placement_info/test.py::test_placement_info_from_file test_placement_info/test.py::test_placement_info_from_imds test_placement_info/test.py::test_placement_info_missing_data test_postgresql_protocol/test.py::test_java_client test_postgresql_protocol/test.py::test_psql_client test_postgresql_protocol/test.py::test_python_client test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions test_postgresql_replica_database_engine_1/test.py::test_different_data_types test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries test_postgresql_replica_database_engine_1/test.py::test_multiple_databases test_postgresql_replica_database_engine_1/test.py::test_quoting_1 test_postgresql_replica_database_engine_1/test.py::test_quoting_2 test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index -vvv" altinityinfra/integration-tests-runner:9d492c2eec24 '.
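The PYTEST_ADDOPTS value in the command above is where the runner injects its pytest-xdist options: --dist=loadfile keeps every test from one file on the same worker, and -n 10 starts ten workers. A minimal sketch of replaying part of this selection outside the container (an illustration, assuming pytest and pytest-xdist are installed locally and tests/integration is the working directory; --run-id is a ClickHouse conftest option and is omitted here):

# Minimal sketch: rerun part of the CI selection with the same xdist options.
# Test IDs are copied from the docker command above; "-n 2" is used instead of
# the CI's "-n 10" purely as an example.
import sys

import pytest

if __name__ == "__main__":
    sys.exit(
        pytest.main(
            [
                "--dist=loadfile",  # tests from the same file stay on one worker
                "-n", "2",
                "-rfEps",           # summarize failed/errors/passed/skipped
                "test_insert_over_http_query_log/test.py",
                "test_partition/test.py::test_partition_simple",
            ]
        )
    )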
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: order-1.0.1, random-0.2, timeout-2.2.0, repeat-0.9.3, reportlog-0.4.0, xdist-3.5.0
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [100 items]

scheduling tests via LoadFileScheduling

test_placement_info/test.py::test_placement_info_from_config
test_partition/test.py::test_attach_check_all_parts
test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]
test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]
test_merge_tree_s3/test.py::test_alter_table_columns[node]
test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication
test_jdbc_bridge/test.py::test_jdbc_delete
test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]
[gw8] [ 1%] PASSED test_placement_info/test.py::test_placement_info_from_config
test_placement_info/test.py::test_placement_info_from_file
[gw3] [ 2%] PASSED test_partition/test.py::test_attach_check_all_parts
test_partition/test.py::test_cannot_attach_active_part
[gw3] [ 3%] PASSED test_partition/test.py::test_cannot_attach_active_part
test_partition/test.py::test_detached_part_dir_exists
[gw6] [ 4%] PASSED test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]
test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]
[gw8] [ 5%] PASSED test_placement_info/test.py::test_placement_info_from_file
test_placement_info/test.py::test_placement_info_from_imds
[gw3] [ 6%] PASSED test_partition/test.py::test_detached_part_dir_exists
test_partition/test.py::test_drop_detached_parts
[gw6] [ 7%] PASSED test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]
test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]
[gw5] [ 8%] PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]
test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]
[gw6] [ 9%] PASSED test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]
test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]
[gw1] [ 10%] FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
[gw5] [ 11%] PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]
test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement
[gw8] [ 12%] PASSED test_placement_info/test.py::test_placement_info_from_imds
test_placement_info/test.py::test_placement_info_missing_data
[gw5] [ 13%] PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement
test_insert_over_http_query_log/test.py::test_insert_over_http_ok
[gw3] [ 14%] PASSED test_partition/test.py::test_drop_detached_parts
[gw8] [ 15%] PASSED test_placement_info/test.py::test_placement_info_missing_data
test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
test_partition/test.py::test_make_clone_in_detached
[gw6] [ 16%] PASSED test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]
[gw5] [ 17%] PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_ok
test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table
[gw5] [ 18%] PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table
[gw4] [ 19%] PASSED test_jdbc_bridge/test.py::test_jdbc_delete
test_jdbc_bridge/test.py::test_jdbc_distributed_query
[gw4] [ 20%] PASSED test_jdbc_bridge/test.py::test_jdbc_distributed_query
test_jdbc_bridge/test.py::test_jdbc_insert
[gw4] [ 21%] PASSED test_jdbc_bridge/test.py::test_jdbc_insert
test_jdbc_bridge/test.py::test_jdbc_query
[gw4] [ 22%] PASSED test_jdbc_bridge/test.py::test_jdbc_query
test_jdbc_bridge/test.py::test_jdbc_table_engine
test_merge_tree_load_parts/test.py::test_merge_tree_load_parts
[gw4] [ 23%] PASSED test_jdbc_bridge/test.py::test_jdbc_table_engine
test_jdbc_bridge/test.py::test_jdbc_update
[gw3] [ 24%] PASSED test_partition/test.py::test_make_clone_in_detached
test_partition/test.py::test_partition_complex
[gw4] [ 25%] PASSED test_jdbc_bridge/test.py::test_jdbc_update
[gw7] [ 26%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
[gw3] [ 27%] PASSED test_partition/test.py::test_partition_complex
test_partition/test.py::test_partition_simple
test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
[gw7] [ 28%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
[gw3] [ 29%] PASSED test_partition/test.py::test_partition_simple
test_partition/test.py::test_system_detached_parts
[gw1] [ 30%] FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
[gw7] [ 31%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
[gw1] [ 32%] FAILED test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
[gw0] [ 33%] PASSED test_merge_tree_s3/test.py::test_alter_table_columns[node]
test_merge_tree_s3/test.py::test_attach_detach_partition[node]
test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
[gw2] [ 34%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]
[gw1] [ 35%] FAILED test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
[gw7] [ 36%] PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
[gw0] [ 37%] PASSED test_merge_tree_s3/test.py::test_attach_detach_partition[node]
test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]
[gw2] [ 38%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]
test_keeper_availability_zone/test.py::test_get_availability_zone
[gw1] [ 39%] FAILED test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
[gw9] [ 40%] PASSED test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication
test_insert_into_distributed/test.py::test_prefer_localhost_replica
test_postgresql_replica_database_engine_1/test.py::test_different_data_types
[gw3] [ 41%] PASSED test_partition/test.py::test_system_detached_parts
[gw1] [ 42%] FAILED test_postgresql_replica_database_engine_1/test.py::test_different_data_types
test_postgresql_protocol/test.py::test_java_client
[gw2] [ 43%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]
test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]
[gw9] [ 44%] PASSED test_insert_into_distributed/test.py::test_prefer_localhost_replica
test_insert_into_distributed/test.py::test_table_function
[gw9] [ 45%] PASSED test_insert_into_distributed/test.py::test_table_function
test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local
[gw5] [ 46%] PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
[gw5] [ 47%] PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
[gw2] [ 48%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]
test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader
test_keeper_and_access_storage/test.py::test_create_replicated
[gw2] [ 49%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]
test_jbod_ha/test.py::test_jbod_ha
[gw4] [ 50%] PASSED test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local
[gw6] [ 51%] PASSED test_postgresql_protocol/test.py::test_java_client
test_postgresql_protocol/test.py::test_psql_client
[gw6] [ 52%] PASSED test_postgresql_protocol/test.py::test_psql_client
test_postgresql_protocol/test.py::test_python_client
[gw6] [ 53%] PASSED test_postgresql_protocol/test.py::test_python_client
[gw1] [ 54%] PASSED test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
[gw7] [ 55%] PASSED test_keeper_availability_zone/test.py::test_get_availability_zone
[gw2] [ 56%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]
[gw1] [ 57%] FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
test_interserver_dns_retires/test.py::test_query
test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
test_keeper_persistent_log_multinode/test.py::test_restart_multinode
[gw8] [ 58%] PASSED test_merge_tree_load_parts/test.py::test_merge_tree_load_parts
test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted
[gw2] [ 59%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]
test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]
[gw1] [ 60%] FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
[gw5] [ 61%] PASSED test_keeper_and_access_storage/test.py::test_create_replicated
[gw2] [ 62%] PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]
test_log_levels_update/test.py::test_log_levels_update
[gw8] [ 63%] PASSED test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted
test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error
[gw4] [ 64%] PASSED test_interserver_dns_retires/test.py::test_query
test_library_bridge/test_exiled.py::test_bridge_dies_with_parent
[gw8] [ 65%] SKIPPED test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error
[gw1] [ 66%] FAILED test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
[gw9] [ 67%] PASSED test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader
[gw7] [ 68%] PASSED test_keeper_persistent_log_multinode/test.py::test_restart_multinode
test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3
[gw4] [ 69%] SKIPPED test_library_bridge/test_exiled.py::test_bridge_dies_with_parent
[gw5] [ 70%] PASSED test_log_levels_update/test.py::test_log_levels_update
[gw1] [ 71%] FAILED test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
test_postgresql_replica_database_engine_1/test.py::test_quoting_1
[gw1] [ 72%] FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_1
test_postgresql_replica_database_engine_1/test.py::test_quoting_2
[gw1] [ 73%] FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_2
test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
[gw1] [ 74%] FAILED test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
[gw0] [ 75%] PASSED test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]
test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]
[gw0] [ 76%] PASSED test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]
test_merge_tree_s3/test.py::test_freeze_unfreeze[node]
[gw0] [ 77%] PASSED test_merge_tree_s3/test.py::test_freeze_unfreeze[node]
test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]
test_keeper_secure_client/test.py::test_connection
[gw7] [ 78%] PASSED test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3
[gw3] [ 79%] PASSED test_jbod_ha/test.py::test_jbod_ha
[gw0] [ 80%] PASSED test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]
test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]
test_keeper_memory_soft_limit/test.py::test_soft_limit_create
[gw0] [ 81%] PASSED test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]
test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]
[gw0] [ 82%] PASSED test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]
test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]
[gw9] [ 83%] PASSED test_keeper_secure_client/test.py::test_connection
[gw3] [ 84%] PASSED test_keeper_memory_soft_limit/test.py::test_soft_limit_create
[gw0] [ 85%] PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]
test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]
[gw0] [ 86%] PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]
test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]
[gw0] [ 87%] PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]
test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]
[gw0] [ 88%] PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]
test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]
[gw0] [ 89%] PASSED test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]
test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]
[gw0] [ 90%] PASSED test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]
test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]
[gw0] [ 91%] PASSED test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]
test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]
[gw0] [ 92%] PASSED test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]
test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]
[gw0] [ 93%] PASSED test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]
test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]
[gw0] [ 94%] SKIPPED test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]
test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]
[gw0] [ 95%] SKIPPED test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]
test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]
[gw0] [ 96%] SKIPPED test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]
test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]
[gw0] [ 97%] PASSED test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]
test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]
[gw0] [ 98%] PASSED test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]
test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]
[gw0] [ 99%] PASSED test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]
test_merge_tree_s3/test.py::test_table_manipulations[node]
[gw0] [100%] PASSED test_merge_tree_s3/test.py::test_table_manipulations[node]

=================================== FAILURES ===================================
_____________ test_abrupt_connection_loss_while_heavy_replication ______________
[gw1] linux -- Python 3.10.12 /usr/bin/python3
started_cluster = 

    def test_abrupt_connection_loss_while_heavy_replication(started_cluster):
        def transaction(thread_id):
            if thread_id % 2:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=True,
                )
            else:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=False,
                )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            if thread_id % 2 == 0:
                conn.commit()

        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES, numbers=0)

        threads_num = 6
        threads = []
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()

        for thread in threads:
            thread.join()

        # Join here because it takes time for data to reach wal
        time.sleep(2)
        started_cluster.pause_container("postgres1")
        # for i in range(NUM_TABLES):
        #     result = instance.query(f"SELECT count() FROM test_database.postgresql_replica_{i}")
        #     print(result)
        # Just debug
        started_cluster.unpause_container("postgres1")
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:752: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E   helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E   Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E   
E   0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E   1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E   2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E   3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E   4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E   5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E   6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E   7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E   8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81
E   9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd
E   10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a
E   11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca
E   12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d
E   13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891
E   14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf
E   15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57
E   16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c
E   17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28
E   18. asan_thread_start(void*) @ 0x000000000a7b9edb
E   19. ? @ 0x00007f1403338ac3
E   20. ? @ 0x00007f14033ca850
E   . (UNKNOWN_TABLE)
E   (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
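The UNKNOWN_TABLE error above means the table was missing on the ClickHouse side when the verification query ran, which suggests the MaterializedPostgreSQL database had not re-created its tables after the pause/unpause of postgres1; the log alone does not prove the root cause. A minimal diagnostic sketch, assuming only the `instance` fixture and its `query()` method visible in the traceback (the `dump_materialized_tables` helper name is hypothetical, not part of the test suite):

# Hypothetical debugging helper, not part of test.py: show which replicated
# tables exist on the ClickHouse side before the synchronization check runs.
# system.tables is a standard ClickHouse system table.
def dump_materialized_tables(instance, database="test_database"):
    names = instance.query(
        f"SELECT name FROM system.tables WHERE database = '{database}' ORDER BY name"
    )
    print(f"Tables currently present in {database}:\n{names}")

# Example placement: call it right after started_cluster.unpause_container("postgres1")
# and before check_several_tables_are_synchronized(instance, NUM_TABLES).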
Files: config.xml, users.xml PostgreSQL is available - running test ------------------------------ Captured log setup ------------------------------ 2025-06-08 17:28:12 [ 549 ] DEBUG : Command:['docker ps | wc -l'] (cluster.py:113, run_and_check) 2025-06-08 17:28:12 [ 549 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check) 2025-06-08 17:28:12 [ 549 ] DEBUG : No running containers (conftest.py:92, cleanup_environment) 2025-06-08 17:28:12 [ 549 ] DEBUG : Pruning Docker networks (conftest.py:94, cleanup_environment) 2025-06-08 17:28:12 [ 549 ] DEBUG : Command:['docker network prune --force'] (cluster.py:113, run_and_check) 2025-06-08 17:28:12 [ 549 ] DEBUG : Command:["sysctl net.ipv4.ip_local_port_range='55000 65535'"] (cluster.py:113, run_and_check) 2025-06-08 17:28:12 [ 549 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:121, run_and_check) 2025-06-08 17:28:12 [ 549 ] INFO : Running tests in /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/test.py (cluster.py:2659, start) 2025-06-08 17:28:12 [ 549 ] DEBUG : Cluster start called. is_up=False (cluster.py:2666, start) 2025-06-08 17:28:13 [ 549 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces) 2025-06-08 17:28:13 [ 549 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces) 2025-06-08 17:28:13 [ 549 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces) 2025-06-08 17:28:13 [ 549 ] DEBUG : Cleanup called (cluster.py:801, cleanup) 2025-06-08 17:28:13 [ 549 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces) 2025-06-08 17:28:13 [ 549 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces) 2025-06-08 17:28:13 [ 549 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces) 2025-06-08 17:28:13 [ 549 ] DEBUG : Command:docker container list --all --filter name='^/roottestpostgresqlreplicadatabaseengine1_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check) 2025-06-08 17:28:13 [ 549 ] DEBUG : Unstopped containers: {} (cluster.py:815, cleanup) 2025-06-08 17:28:13 [ 549 ] DEBUG : No running containers for project: roottestpostgresqlreplicadatabaseengine1 (cluster.py:829, cleanup) 2025-06-08 17:28:13 [ 549 ] DEBUG : Trying to prune unused networks... (cluster.py:835, cleanup) 2025-06-08 17:28:13 [ 549 ] DEBUG : Trying to prune unused images... (cluster.py:851, cleanup) 2025-06-08 17:28:13 [ 549 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check) 2025-06-08 17:28:13 [ 549 ] DEBUG : Stderr:Error response from daemon: a prune operation is already running (cluster.py:123, run_and_check) 2025-06-08 17:28:13 [ 549 ] DEBUG : Exitcode:1 (cluster.py:125, run_and_check) 2025-06-08 17:28:13 [ 549 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:860, cleanup) 2025-06-08 17:28:13 [ 549 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check) 2025-06-08 17:28:13 [ 549 ] DEBUG : Stdout:1 (cluster.py:121, run_and_check) 2025-06-08 17:28:13 [ 549 ] DEBUG : Setup directory for instance: instance (cluster.py:2679, start) 2025-06-08 17:28:13 [ 549 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4383, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Create directory for common tests configuration (cluster.py:4388, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Copy common configuration from helpers (cluster.py:4408, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Generate and write macros file (cluster.py:4441, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/configs/log_conf.xml'] to /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/configs/config.d (cluster.py:4471, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/database (cluster.py:4488, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/logs (cluster.py:4499, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Entrypoint cmd: bash -c "trap 'pkill tail' INT TERM; clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon; coproc tail -f /dev/null; wait $$!" (cluster.py:4582, create_dir) 2025-06-08 17:28:13 [ 549 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'POSTGRES_PORT': '5432', 'POSTGRES_DIR': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', 'POSTGRES_LOGS_FS': 'bind'} stored in /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env (cluster.py:86, _create_env_file) 2025-06-08 17:28:13 [ 549 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-06-08 17:28:13 [ 549 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-06-08 17:28:13 [ 549 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-06-08 17:28:13 [ 549 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-06-08 17:28:13 [ 549 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:546, _make_request) 2025-06-08 17:28:13 [ 549 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'pull'] (cluster.py:113, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling postgres1 ... 
(cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling instance ... (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling postgres1 ... pulling from library/postgres (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling postgres1 ... digest: sha256:6efd0df010dc3cb40d... (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling postgres1 ... status: image is up to date for p... (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling postgres1 ... done (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling instance ... pulling from altinityinfra/integr... (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling instance ... digest: sha256:8a2c68e2d63d82c826... (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling instance ... status: image is up to date for a... (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Stderr:Pulling instance ... done (cluster.py:123, run_and_check) 2025-06-08 17:28:24 [ 549 ] DEBUG : Setup Postgres (cluster.py:2791, start) 2025-06-08 17:28:24 [ 549 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', '--verbose', 'up', '-d'] (cluster.py:113, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.config.config.find: Using configuration files: /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.docker_client.get_client: docker-compose version 1.29.2, build unknown (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:docker-py version: (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:CPython version: 3.10.12 (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:OpenSSL version: OpenSSL 3.0.2 15 Mar 2022 (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker base_url: http+docker://localhost (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.docker_client.get_client: Docker version: Platform={'Name': 'Docker Engine - Community'}, Components=[{'Name': 'Engine', 'Version': '23.0.6', 'Details': {'ApiVersion': '1.42', 'Arch': 'amd64', 'BuildTime': '2023-05-05T21:18:13.000000000+00:00', 'Experimental': 'false', 'GitCommit': '9dbdbd4', 'GoVersion': 'go1.19.9', 'KernelVersion': '5.15.0-130-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}, {'Name': 'containerd', 'Version': '1.7.18', 'Details': {'GitCommit': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e'}}, {'Name': 'runc', 'Version': '1.7.18', 'Details': {'GitCommit': 'v1.1.13-0-g58aa920'}}, {'Name': 'docker-init', 'Version': '0.19.0', 'Details': {'GitCommit': 'de40ad0'}}], Version=23.0.6, ApiVersion=1.42, MinAPIVersion=1.12, GitCommit=9dbdbd4, GoVersion=go1.19.9, Os=linux, Arch=amd64, KernelVersion=5.15.0-130-generic, BuildTime=2023-05-05T21:18:13.000000000+00:00 (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- 
('roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info <- () (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker info -> {'Architecture': 'x86_64', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'BridgeNfIp6tables': True, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'BridgeNfIptables': True, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'CPUSet': True, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'CPUShares': True, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'CgroupDriver': 'cgroupfs', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'CgroupVersion': '2', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'ContainerdCommit': {'Expected': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'ID': 'ae71819c4f5e67bb4d5ae76a6b735f29cc25774e'}, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Containers': 0, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.network.ensure: Creating network "roottestpostgresqlreplicadatabaseengine1_default" with the default driver (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network <- (name='roottestpostgresqlreplicadatabaseengine1_default', driver=None, options=None, ipam=None, internal=False, enable_ipv6=False, labels={'com.docker.compose.project': 'roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.network': 'default', 'com.docker.compose.version': '1.29.2'}, attachable=True, check_duplicate=True) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_network -> {'Id': '9bc6bff812395114cc11fdf14b5e6cd3811e6009e9ba954db5cf9a9b37298189', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Warning': ''} (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=False, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: 
docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {} (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service=postgres1', 'com.docker.compose.oneoff=False']}) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 0 items) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... 
(cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: {ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1)} (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.feed_queue: Starting producer thread for ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:... (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('postgres') (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Author': '', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Comment': 'buildkit.dockerfile.v0', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Config': {'AttachStderr': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'AttachStdin': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'AttachStdout': False, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Cmd': ['postgres'], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Domainname': '', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Entrypoint': ['docker-entrypoint.sh'], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Env': ['PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/17/bin', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:... 
(cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.service.build_container_labels: Added config hash: 93968ec629d91980ea64eb0c7d74531b9606057f28d2f63fd9c8a6b21e104bb1 (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config <- (links=[], port_bindings={}, binds=[], volumes_from=[], privileged=False, network_mode='roottestpostgresqlreplicadatabaseengine1_default', devices=None, device_requests=None, dns=None, dns_opt=None, dns_search=None, restart_policy={'Name': 'always', 'MaximumRetryCount': 0}, runtime=None, cap_add=None, cap_drop=None, mem_limit=None, mem_reservation=None, memswap_limit=None, ulimits=None, log_config={'Type': '', 'Config': {}}, extra_hosts=None, read_only=None, pid_mode=None, security_opt=None, ipc_mode=None, cgroup_parent=None, cpu_quota=None, shm_size=None, sysctls=None, pids_limit=None, tmpfs=None, oom_kill_disable=None, oom_score_adj=None, mem_swappiness=None, group_add=None, userns_mode=None, init=None, init_path=None, isolation=None, cpu_count=None, cpu_percent=None, nano_cpus=None, volume_driver=None, cpuset_cpus=None, cpu_shares=None, storage_opt=None, blkio_weight=None, blkio_weight_device=None, device_read_bps=None, device_read_iops=None, device_write_bps=None, device_write_iops=None, mounts=[{'Target': '/postgres/', 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', 'Type': 'bind', 'ReadOnly': None}], device_cgroup_rules=None, cpu_period=None, cpu_rt_period=None, cpu_rt_runtime=None) (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_host_config -> {'Binds': [], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Links': [], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'LogConfig': {'Config': {}, 'Type': ''}, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Mounts': [{'ReadOnly': None, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Target': '/postgres/', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Type': 'bind'}], (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'NetworkMode': 'roottestpostgresqlreplicadatabaseengine1_default', (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'PortBindings': {}, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'RestartPolicy': {'MaximumRetryCount': 0, 'Name': 'always'}, (cluster.py:123, run_and_check) 2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:... 
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container <- (command=['postgres', '-c', 'wal_level=logical', '-c', 'max_replication_slots=4', '-c', 'logging_collector=on', '-c', 'log_directory=/postgres/logs', '-c', 'log_filename=postgresql.log', '-c', 'log_statement=all', '-c', 'max_connections=200'], environment=['POSTGRES_HOST_AUTH_METHOD=trust', 'POSTGRES_PASSWORD=mysecretpassword', 'PGDATA=/postgres/data'], healthcheck={'test': ['CMD-SHELL', 'pg_isready -U postgres'], 'interval': 10000000000, 'timeout': 5000000000, 'retries': 5}, image='postgres', volumes={}, name='roottestpostgresqlreplicadatabaseengine1_postgres1_1', detach=True, ports=['5432'], labels={'com.docker.compose.project': 'roottestpostgresqlreplicadatabaseengine1', 'com.docker.compose.service': 'postgres1', 'com.docker.compose.oneoff': 'False', 'com.docker.compose.project.working_dir': '/ClickHouse/tests/integration/compose', 'com.docker.compose.project.config_files': '/ClickHouse/tests/integration/compose/docker_compose_postgres.yml', 'com.docker.compose.project.environment_file': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', 'com.docker.compose.container-number': '1', 'com.docker.compose.version': '1.29.2', 'com.docker.compose.config-hash': '93968ec629d91980ea64eb0c7d74531b9606057f28d2f63fd9c8a6b21e104bb1'}, host_config={'NetworkMode': 'roottestpostgresqlreplicadatabaseengine1_default', 'RestartPolicy': {'Name': 'always', 'MaximumRetryCount': 0}, 'VolumesFrom': [], 'Binds': [], 'PortBindings': {}, 'Links': [], 'LogConfig': {'Type': '', 'Config': {}}, 'Mounts': [{'Target': '/postgres/', 'Source': '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/postgres/postgres1', 'Type': 'bind', 'ReadOnly': None}]}, networking_config={'EndpointsConfig': {'roottestpostgresqlreplicadatabaseengine1_default': {'Aliases': ['postgre-sql.local', 'postgres1'], 'IPAMConfig': {}}}}) (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'b465612cec0a2e1c309ac13380f9feceef63b9378d9903f9ac5bc3d89a4488aa', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Warnings': []} (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('b465612cec0a2e1c309ac13380f9feceef63b9378d9903f9ac5bc3d89a4488aa') (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'Args': ['postgres', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'wal_level=logical', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'max_replication_slots=4', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'logging_collector=on', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: '-c', (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr: 'log_directory=/postgres/logs', (cluster.py:123, run_and_check)
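Editor's note: the create_container call above is docker-compose driving the Docker API, and every flag it passes is visible in the log. For readers reproducing this PostgreSQL container outside the harness, a rough standalone equivalent with the docker Python SDK follows; it is a sketch, not harness code, with the command, environment, healthcheck, and names copied from the log and everything else assumed.

```python
import docker

client = docker.from_env()

# Reproduces the configuration from the create_container record above.
container = client.containers.run(
    "postgres",
    name="roottestpostgresqlreplicadatabaseengine1_postgres1_1",
    command=[
        "postgres",
        "-c", "wal_level=logical",        # required for logical replication slots
        "-c", "max_replication_slots=4",
        "-c", "logging_collector=on",
        "-c", "log_directory=/postgres/logs",
        "-c", "log_filename=postgresql.log",
        "-c", "log_statement=all",
        "-c", "max_connections=200",
    ],
    environment={
        "POSTGRES_HOST_AUTH_METHOD": "trust",
        "POSTGRES_PASSWORD": "mysecretpassword",
        "PGDATA": "/postgres/data",
    },
    healthcheck={
        "test": ["CMD-SHELL", "pg_isready -U postgres"],
        "interval": 10_000_000_000,       # nanoseconds, as in the log record
        "timeout": 5_000_000_000,
        "retries": 5,
    },
    network="roottestpostgresqlreplicadatabaseengine1_default",
    restart_policy={"Name": "always", "MaximumRetryCount": 0},
    detach=True,
)
```

wal_level=logical is the setting that matters for MaterializedPostgreSQL: without it PostgreSQL refuses to create the logical replication slots that the engine consumes.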
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:... (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('b465612cec0a2e1c309ac13380f9feceef63b9378d9903f9ac5bc3d89a4488aa', 'roottestpostgresqlreplicadatabaseengine1_default') (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('b465612cec0a2e1c309ac13380f9feceef63b9378d9903f9ac5bc3d89a4488aa', 'roottestpostgresqlreplicadatabaseengine1_default', aliases=['postgre-sql.local', 'postgres1', 'b465612cec0a'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None) (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None (cluster.py:123, run_and_check)
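Editor's note: the disconnect/re-connect pair above is how compose attaches the container with its final alias list (the custom alias, the service name, and the short container id). A minimal docker SDK sketch of the same step, assuming the names from the log:

```python
import docker

client = docker.from_env()
network = client.networks.get("roottestpostgresqlreplicadatabaseengine1_default")
container = client.containers.get("roottestpostgresqlreplicadatabaseengine1_postgres1_1")

# Re-attach so all three DNS aliases resolve to this container on the network.
network.disconnect(container)
network.connect(container, aliases=["postgre-sql.local", "postgres1", "b465612cec0a"])
```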
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start <- ('b465612cec0a2e1c309ac13380f9feceef63b9378d9903f9ac5bc3d89a4488aa') (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.cli.verbose_proxy.proxy_callable: docker start -> None (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='roottestpostgresqlreplicadatabaseengine1', service='postgres1', number=1) (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.parallel_execute_iter: Finished processing: (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : Stderr:compose.parallel.feed_queue: Pending: set() (cluster.py:123, run_and_check)
2025-06-08 17:28:25 [ 549 ] DEBUG : get_instance_ip instance_name=postgres1 (cluster.py:2008, get_instance_ip)
2025-06-08 17:28:25 [ 549 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_postgres1_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:28:25 [ 549 ] DEBUG : Can't connect to Postgres connection to server at "172.16.3.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:28:26 [ 549 ] DEBUG : Can't connect to Postgres connection to server at "172.16.3.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:28:26 [ 549 ] DEBUG : Can't connect to Postgres connection to server at "172.16.3.2", port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections? (cluster.py:2251, wait_postgres_to_start)
2025-06-08 17:28:27 [ 549 ] DEBUG : Postgres Started (cluster.py:2248, wait_postgres_to_start)
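Editor's note: the "Connection refused" lines above are the harness polling until PostgreSQL accepts TCP connections. A minimal standalone equivalent of that wait loop (a sketch using psycopg2, not the harness's wait_postgres_to_start; credentials copied from the log):

```python
import time
import psycopg2

def wait_postgres(ip, port=5432, timeout=60.0):
    """Poll until PostgreSQL accepts connections, mirroring the retries above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            conn = psycopg2.connect(
                host=ip, port=port, user="postgres", password="mysecretpassword"
            )
            conn.close()
            return  # the "Postgres Started" point in the log
        except psycopg2.OperationalError:
            # "Connection refused" while the server is still initializing
            time.sleep(0.5)
    raise TimeoutError(f"PostgreSQL at {ip}:{port} did not start within {timeout}s")
```

Two or three refused attempts within the first couple of seconds, as seen here, are normal: the container is up before initdb and the postmaster finish starting.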
2025-06-08 17:28:27 [ 549 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker-compose --env-file /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env --project-name roottestpostgresqlreplicadatabaseengine1 --file /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml up -d --no-recreate') (cluster.py:3002, start)
2025-06-08 17:28:27 [ 549 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'up', '-d', '--no-recreate'] (cluster.py:113, run_and_check)
2025-06-08 17:28:27 [ 549 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:28:27 [ 549 ] DEBUG : Stderr:Creating roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:28:27 [ 549 ] DEBUG : ClickHouse instance created (cluster.py:3010, start)
2025-06-08 17:28:27 [ 549 ] DEBUG : get_instance_ip instance_name=instance (cluster.py:2008, get_instance_ip)
2025-06-08 17:28:27 [ 549 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_instance_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:28:27 [ 549 ] DEBUG : Waiting for ClickHouse start in instance, ip: 172.16.3.3... (cluster.py:3017, start)
2025-06-08 17:28:27 [ 549 ] DEBUG : http://localhost:None "GET /v1.42/containers/roottestpostgresqlreplicadatabaseengine1_instance_1/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:28:27 [ 549 ] DEBUG : http://localhost:None "GET /v1.42/containers/35c8139b948267ae281c134c303e0ce933d7bd283b0edb39aaf63f65111d85ea/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:28:28 [ 549 ] DEBUG : http://localhost:None "GET /v1.42/containers/35c8139b948267ae281c134c303e0ce933d7bd283b0edb39aaf63f65111d85ea/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:28:29 [ 549 ] DEBUG : http://localhost:None "GET /v1.42/containers/35c8139b948267ae281c134c303e0ce933d7bd283b0edb39aaf63f65111d85ea/json HTTP/1.1" 200 None (connectionpool.py:546, _make_request)
2025-06-08 17:28:29 [ 549 ] DEBUG : ClickHouse instance started (cluster.py:3021, start)
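Editor's note: the repeated GET /containers/<id>/json requests above are the readiness poll against the Docker API (runs of identical records have been collapsed). A sketch of such a check, combining the container-state poll with the trivial-query ping ("select 20") the harness uses later in this log; the container and query helpers here are assumptions, not the harness API:

```python
import time

def wait_clickhouse_ready(container, query_fn, timeout=60.0):
    """Wait until the container runs and the server answers a trivial query."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        container.reload()  # re-fetches /containers/<id>/json, as polled above
        if container.status == "running":
            try:
                if query_fn("select 20").strip() == "20":
                    return
            except Exception:
                pass  # container is up but the server is not accepting queries yet
        time.sleep(0.2)
    raise TimeoutError("ClickHouse did not become ready")
```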
2025-06-08 17:28:29 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:30 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 0, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 1, query DELETE
FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 1, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 1, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 2, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 2, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 1, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 3, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 3, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 
4, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 3, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 3, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 4, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 3, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 1, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 5, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 4, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 3, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 4, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 2, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 2, query UPDATE postgresql_replica_{} SET 
value = value + 2 WHERE key % 3 = 1; thread 2, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 4, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 3, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; Checking table postgresql_replica_0 exists in test_database Checking table is synchronized: test_database.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:28:30 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:28:30 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:28:30 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:28:36 [ 549 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'pause', 'postgres1'] (cluster.py:113, run_and_check) 2025-06-08 17:28:36 [ 549 ] DEBUG : Stderr:Pausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:28:36 [ 549 ] DEBUG : Stderr:Pausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check) 2025-06-08 17:28:36 [ 549 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'unpause', 'postgres1'] (cluster.py:113, run_and_check) 2025-06-08 17:28:36 [ 549 ] DEBUG : Stderr:Unpausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check) 2025-06-08 17:28:36 [ 549 ] DEBUG : Stderr:Unpausing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... 
done (cluster.py:123, run_and_check)
2025-06-08 17:28:36 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:36 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:28:37 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:28:37 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:28:37 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:38 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:38 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
______________ test_abrupt_server_restart_while_heavy_replication ______________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_abrupt_server_restart_while_heavy_replication(started_cluster):
        def transaction(thread_id):
            if thread_id % 2:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=True,
                )
            else:
                conn = get_postgres_conn(
                    ip=started_cluster.postgres_ip,
                    port=started_cluster.postgres_port,
                    database=True,
                    auto_commit=False,
                )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            if thread_id % 2 == 0:
                conn.commit()

        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(tables_num=NUM_TABLES, numbers=0)

        threads = []
        threads_num = 6
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()

        for thread in threads:
            thread.join()  # Join here because it takes time for data to reach wal

        instance.restart_clickhouse()

>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:820:
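Editor's note: the failing call chains through helpers/postgres_utility.py in the frames that follow. Reconstructed from the two probe queries the harness logs for each check, the comparison amounts to the sketch below; the function name comes from the traceback, the body is an assumption, and the replica-side query deliberately reproduces the quoting seen in this log:

```python
def check_tables_are_synchronized(instance, table_name):
    # Source of truth: read through the PostgreSQL proxy database.
    expected = instance.query(
        "select * from `postgres_database`.`{}` order by key;".format(table_name)
    )
    # Replica side: the MaterializedPostgreSQL database. Note the quoting:
    # the whole dotted name sits inside one pair of backticks.
    result_query = "select * from `test_database.{}` order by key;".format(table_name)
    result = instance.query(result_query)  # this is the call that raises below
    assert result == expected
```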
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E
E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81
E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd
E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a
E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca
E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d
E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891
E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf
E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57
E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c
E 17.
./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f4f52745ac3 E 20. ? @ 0x00007f4f527d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 0, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 1, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 1, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 2, query UPDATE postgresql_replica_{} SET value = value - 
125 WHERE key % 2 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 2, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 2, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 1, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 0, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 1, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 3, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 2, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 0, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 3, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 0, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 3, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 3, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 4, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 4, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 4, query INSERT INTO postgresql_replica_{} select i, i from 
generate_series(40000, 50000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 1, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 4, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 3, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i); thread 4, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 5, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0 thread 1, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 2, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i); thread 5, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0; thread 1, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 4, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1; thread 4, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 5, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1 thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0; thread 2, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 2, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i); thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0; thread 5, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0; thread 3, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1 thread 3, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i); thread 3, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2; thread 4, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 4, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1; thread 4, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0; thread 5, query UPDATE postgresql_replica_{} SET key=key+10000000 thread 5, query 
UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 5, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:28:38 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:38 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:28:38 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:28:41 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:True cmd: ['bash', '-c', 'ps -C clickhouse'] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:41 [ 549 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'ps -C clickhouse'] (cluster.py:113, run_and_check)
2025-06-08 17:28:41 [ 549 ] DEBUG : Stdout: PID TTY TIME CMD (cluster.py:121, run_and_check)
2025-06-08 17:28:41 [ 549 ] DEBUG : Stdout: 8 ? 00:00:06 clickhouse (cluster.py:121, run_and_check)
2025-06-08 17:28:41 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'pkill clickhouse'] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:41 [ 549 ] DEBUG : Command:['docker', 'exec', '-u', 'root', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'pkill clickhouse'] (cluster.py:113, run_and_check)
2025-06-08 17:28:41 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:41 [ 549 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:28:42 [ 549 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
2025-06-08 17:28:43 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:43 [ 549 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:28:43 [ 549 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
2025-06-08 17:28:44 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:44 [ 549 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:28:44 [ 549 ] DEBUG : Stdout:8 (cluster.py:121, run_and_check)
2025-06-08 17:28:45 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:45 [ 549 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:28:45 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:45 [ 549 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:28:45 [ 549 ] DEBUG : No clickhouse process running. Start new one. (cluster.py:3817, start_clickhouse)
2025-06-08 17:28:45 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:45 [ 549 ] DEBUG : Command:['docker', 'exec', '-u', '0', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', 'clickhouse server --config-file=/etc/clickhouse-server/config.xml --log-file=/var/log/clickhouse-server/clickhouse-server.log --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log --daemon'] (cluster.py:113, run_and_check)
2025-06-08 17:28:46 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:46 [ 549 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:28:46 [ 549 ] DEBUG : Stdout:765 (cluster.py:121, run_and_check)
2025-06-08 17:28:46 [ 549 ] DEBUG : Clickhouse process running. (cluster.py:3828, start_clickhouse)
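Editor's note: the pkill / poll / daemon-start sequence above is what restart_clickhouse amounts to. A condensed sketch of the same flow (helper names are assumptions, and pgrep stands in for the grep pipeline the harness actually runs):

```python
import subprocess
import time

def docker_exec(container, cmd, user=None):
    """Run a shell command inside a container, like the 'docker exec' records above."""
    argv = ["docker", "exec"] + (["-u", user] if user else []) + [container, "bash", "-c", cmd]
    return subprocess.run(argv, capture_output=True, text=True).stdout

def restart_clickhouse(container):
    docker_exec(container, "pkill clickhouse", user="root")
    while docker_exec(container, "pgrep clickhouse").strip():
        time.sleep(1)  # poll until the old process (PID 8 above) has exited
    docker_exec(
        container,
        "clickhouse server --config-file=/etc/clickhouse-server/config.xml "
        "--log-file=/var/log/clickhouse-server/clickhouse-server.log "
        "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log "
        "--daemon",
        user="0",
    )
    while not docker_exec(container, "pgrep clickhouse").strip():
        time.sleep(1)  # poll until the new process appears (PID 765 in the log)
```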
2025-06-08 17:28:46 [ 549 ] DEBUG : run container_id:roottestpostgresqlreplicadatabaseengine1_instance_1 detach:False nothrow:False cmd: ['bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:2046, exec_in_container)
2025-06-08 17:28:46 [ 549 ] DEBUG : Command:['docker', 'exec', 'roottestpostgresqlreplicadatabaseengine1_instance_1', 'bash', '-c', "ps ax | grep 'clickhouse' | grep -v 'grep' | grep -v 'coproc' | grep -v 'bash -c' | awk '{print $1}'"] (cluster.py:113, run_and_check)
2025-06-08 17:28:46 [ 549 ] DEBUG : Stdout:765 (cluster.py:121, run_and_check)
2025-06-08 17:28:46 [ 549 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:28:47 [ 549 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:28:47 [ 549 ] DEBUG : Executing query select 20 on instance (cluster.py:3455, query)
2025-06-08 17:28:47 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:48 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:28:48 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:28:48 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:28:48 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:49 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:49 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
_____________________ test_changing_replica_identity_value _____________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_changing_replica_identity_value(started_cluster):
        pg_manager.create_postgres_table("postgresql_replica")
        instance.query(
            "INSERT INTO postgres_database.postgresql_replica SELECT 50 + number, number from numbers(50)"
        )
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
        instance.query(
            "INSERT INTO postgres_database.postgresql_replica SELECT 100 + number, number from numbers(50)"
        )
>       check_tables_are_synchronized(instance, "postgresql_replica")

test_postgresql_replica_database_engine_1/test.py:292:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )
E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica' in scope SELECT * FROM `test_database.postgresql_replica` ORDER BY key ASC. Stack trace:
E
E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73
E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070
E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6
E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909
E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57
E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e
E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236
E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54
E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81
E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd
E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a
E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca
E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d
E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891
E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf
E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57
E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c
E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28
E 18. asan_thread_start(void*) @ 0x000000000a7b9edb
E 19. ? @ 0x00007f4f52745ac3
E 20. ? @ 0x00007f4f527d7850
E . (UNKNOWN_TABLE)
E (query: select * from `test_database.postgresql_replica` order by key;)
helpers/client.py:239: QueryRuntimeException
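Editor's note: every failure in this run has the same shape. The probe against the source database is quoted as `postgres_database`.`postgresql_replica...` and succeeds, while the probe against the replica wraps the entire dotted name in a single pair of backticks, `test_database.postgresql_replica...`. This server's analyzer treats one back-quoted token as a single identifier, so the query fails with UNKNOWN_TABLE (Code: 60) even when the table is present. A sketch of the distinction (a hypothetical helper, not the harness code):

```python
table = "postgresql_replica"

# One back-quoted token: a single identifier literally named
# "test_database.postgresql_replica", rejected here with Code: 60.
broken = "select * from `test_database.{}` order by key;".format(table)

# Each part quoted separately: a database-qualified table name, the same
# form used for the postgres_database probe that succeeds in this log.
fixed = "select * from `test_database`.`{}` order by key;".format(table)
```

Given that the SHOW TABLES FROM `test_database` probes immediately before each failure return without error, the quoting of result_query is the first thing to rule out before suspecting replication itself.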
./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f4f52745ac3 E 20. ? @ 0x00007f4f527d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table postgresql_replica exists in test_database Checking table is synchronized: test_database.postgresql_replica ------------------------------ Captured log call ------------------------------- 2025-06-08 17:28:49 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:28:49 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:28:49 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:28:49 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:28:50 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT 100 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:28:50 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:28:50 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:28:50 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:28:50 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:28:51 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:28:51 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:28:51 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) ___________________________ test_clickhouse_restart ____________________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_clickhouse_restart(started_cluster): NUM_TABLES = 5 pg_manager.create_and_fill_postgres_tables(NUM_TABLES) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) > check_several_tables_are_synchronized(instance, NUM_TABLES) test_postgresql_replica_database_engine_1/test.py:303: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:419: in 
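Every failure in this run has the same shape: the reference dump quotes database and table separately (select * from `postgres_database`.`postgresql_replica` ...), while the check against the materialized side wraps the whole dotted name in a single pair of backticks (select * from `test_database.postgresql_replica` ...), so the server looks up one identifier literally named "test_database.postgresql_replica" and fails with UNKNOWN_TABLE. A minimal sketch of the presumed bug in the result-query construction and its fix follows; the function names and signature are a hypothetical reconstruction, not the actual helpers/postgres_utility.py source.

    # Hypothetical reconstruction of the result-query construction in
    # helpers/postgres_utility.py::check_tables_are_synchronized (only
    # the quoting behaviour is taken from the log above).

    def build_result_query_broken(database: str, table: str, order_by: str = "key") -> str:
        # Buggy shape: one pair of backticks around "db.table" makes the
        # server resolve the whole string as a single table identifier.
        return f"select * from `{database}.{table}` order by {order_by};"

    def build_result_query_fixed(database: str, table: str, order_by: str = "key") -> str:
        # Fixed shape: quote each part separately, exactly like the
        # `postgres_database` reference queries already captured above.
        return f"select * from `{database}`.`{table}` order by {order_by};"

    # The broken variant reproduces the failing query verbatim:
    assert (
        build_result_query_broken("test_database", "postgresql_replica")
        == "select * from `test_database.postgresql_replica` order by key;"
    )

The stack traces are consistent with this reading: they all point at src/Analyzer/Passes/QueryAnalysisPass.cpp, whose identifier resolution sees a single quoted name containing a dot rather than a database-qualified pair.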
___________________________ test_clickhouse_restart ____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_clickhouse_restart(started_cluster):
        NUM_TABLES = 5
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:303:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    [get_answer listing identical to the one in test_changing_replica_identity_value above]

E   helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E   Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E
E   [server stack trace identical to the one in test_changing_replica_identity_value above]
E   . (UNKNOWN_TABLE)
E   (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:28:51 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:28:51 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:28:52 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:28:52 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:28:52 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:28:52 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:52 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:28:52 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:28:53 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:53 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:28:53 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:28:53 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:28:53 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:53 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:54 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
_________________________ test_concurrent_transactions _________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_concurrent_transactions(started_cluster):
        def transaction(thread_id):
            conn = get_postgres_conn(
                ip=started_cluster.postgres_ip,
                port=started_cluster.postgres_port,
                database=True,
                auto_commit=False,
            )
            cursor = conn.cursor()
            for query in queries:
                cursor.execute(query.format(thread_id))
                print("thread {}, query {}".format(thread_id, query))
            conn.commit()

        NUM_TABLES = 6
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES, numbers=0)

        threads = []
        threads_num = 6
        for i in range(threads_num):
            threads.append(threading.Thread(target=transaction, args=(i,)))

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for thread in threads:
            time.sleep(random.uniform(0, 0.5))
            thread.start()

        for thread in threads:
            thread.join()

        for i in range(NUM_TABLES):
>           check_tables_are_synchronized(instance, f"postgresql_replica_{i}")

test_postgresql_replica_database_engine_1/test.py:691:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    [get_answer listing identical to the one in test_changing_replica_identity_value above]
E   helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E   Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E
E   [server stack trace identical to the one in test_changing_replica_identity_value above]
E   . (UNKNOWN_TABLE)
E   (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 1, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 1, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 1, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 0, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 1, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 1, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 1, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 1, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 1, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 1, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 2, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 0, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 2, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 2, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 0, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 2, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 1, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 1, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 2, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 2, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 2, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 3, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 3, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 3, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 3, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 0, query UPDATE postgresql_replica_{} SET key=key+10000000
thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 2, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 3, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 1, query UPDATE postgresql_replica_{} SET key=key+10000000
thread 3, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 3, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 2, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 0, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 0, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
thread 1, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 2, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 4, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 2, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 1, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 4, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 4, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 3, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 4, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 4, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 3, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 4, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 4, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(0, 10000) as t(i);
thread 5, query DELETE FROM postgresql_replica_{} WHERE (value*value) % 3 = 0;
thread 5, query UPDATE postgresql_replica_{} SET value = value - 125 WHERE key % 2 = 0;
thread 5, query UPDATE postgresql_replica_{} SET key=key+20000 WHERE key%2=0
thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(40000, 50000) as t(i);
thread 5, query DELETE FROM postgresql_replica_{} WHERE key % 10 = 0;
thread 3, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 3, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 5, query UPDATE postgresql_replica_{} SET value = value + 101 WHERE key % 2 = 1;
thread 2, query UPDATE postgresql_replica_{} SET key=key+10000000
thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 5, query UPDATE postgresql_replica_{} SET key=key+80000 WHERE key%2=1
thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 2 = 0;
thread 5, query UPDATE postgresql_replica_{} SET value = value + 2000 WHERE key % 5 = 0;
thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 4, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 4, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(200000, 250000) as t(i);
thread 2, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 3 = 0;
thread 5, query UPDATE postgresql_replica_{} SET value = value * 2 WHERE key % 3 = 0;
thread 2, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
thread 4, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 5, query UPDATE postgresql_replica_{} SET key=key+500000 WHERE key%2=1
thread 3, query UPDATE postgresql_replica_{} SET key=key+10000000
thread 4, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 5, query INSERT INTO postgresql_replica_{} select i, i from generate_series(1000000, 1050000) as t(i);
thread 5, query DELETE FROM postgresql_replica_{} WHERE value % 9 = 2;
thread 3, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 3, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
thread 4, query UPDATE postgresql_replica_{} SET key=key+10000000
thread 5, query UPDATE postgresql_replica_{} SET key=key+10000000
thread 4, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 4, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
thread 5, query UPDATE postgresql_replica_{} SET value = value + 2 WHERE key % 3 = 1;
thread 5, query DELETE FROM postgresql_replica_{} WHERE value%5 = 0;
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:28:54 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:54 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:28:54 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:28:57 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:57 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:28:57 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:28:58 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:28:58 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:58 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:28:58 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
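For context on what these checks compare: the captured logs show the helper dumping the table from the PostgreSQL-backed database with an ORDER BY, then dumping the same table from the MaterializedPostgreSQL database, and comparing the two. A rough sketch of that flow follows; the retry policy and parameter names are illustrative, not the real helpers/postgres_utility.py:392 implementation, and `instance.query` is the client used throughout this log.

    import time

    def check_tables_are_synchronized(
        instance,
        table_name,
        order_by="key",
        postgres_database="postgres_database",
        materialized_database="test_database",
        retries=30,
        delay=0.5,
    ):
        # Reference result from the PostgreSQL-backed database.
        expected = instance.query(
            f"select * from `{postgres_database}`.`{table_name}` order by {order_by};"
        )
        result = None
        for _ in range(retries):
            # Poll the MaterializedPostgreSQL side until it catches up.
            result = instance.query(
                f"select * from `{materialized_database}`.`{table_name}` order by {order_by};"
            )
            if result == expected:
                return
            time.sleep(delay)
        raise AssertionError(
            f"{table_name} not synchronized: {result!r} != {expected!r}"
        )

Note the fixed per-part backtick quoting in the second query; with the single-pair quoting seen in this run, the check fails immediately with UNKNOWN_TABLE instead of polling.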
__________________________ test_different_data_types ___________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_different_data_types(started_cluster):
        conn = get_postgres_conn(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            database=True,
        )
        cursor = conn.cursor()
        cursor.execute("drop table if exists test_data_types;")
        cursor.execute("drop table if exists test_array_data_type;")
        cursor.execute(
            """CREATE TABLE test_data_types (
            id integer PRIMARY KEY, a smallint, b integer, c bigint, d real, e double precision, f serial, g bigserial, h timestamp, i date, j decimal(5, 5), k numeric(5, 5))"""
        )
        cursor.execute(
            """CREATE TABLE test_array_data_type (
            key Integer NOT NULL PRIMARY KEY,
            a Date[] NOT NULL,                -- Date
            b Timestamp[] NOT NULL,           -- DateTime64(6)
            c real[][] NOT NULL,              -- Float32
            d double precision[][] NOT NULL,  -- Float64
            e decimal(5, 5)[][][] NOT NULL,   -- Decimal32
            f integer[][][] NOT NULL,         -- Int32
            g Text[][][][][] NOT NULL,        -- String
            h Integer[][][],                  -- Nullable(Int32)
            i Char(2)[][][][],                -- Nullable(String)
            k Char(2)[]                       -- Nullable(String)
            )"""
        )
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )

        for i in range(10):
            instance.query(
                """
                INSERT INTO postgres_database.test_data_types VALUES
                ({}, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2)""".format(
                    i
                )
            )
>       check_tables_are_synchronized(instance, "test_data_types", "id")

test_postgresql_replica_database_engine_1/test.py:170:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    [get_answer listing identical to the one in test_changing_replica_identity_value above]

E   helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E   Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.test_data_types' in scope SELECT * FROM `test_database.test_data_types` ORDER BY id ASC. Stack trace:
E
E   [server stack trace identical to the one in test_changing_replica_identity_value above]
E   . (UNKNOWN_TABLE)
E   (query: select * from `test_database.test_data_types` order by id;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Checking table test_data_types exists in test_database
Checking table is synchronized: test_database.test_data_types
------------------------------ Captured log call -------------------------------
2025-06-08 17:28:58 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:28:58 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:28:58 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:28:59 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (0, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:28:59 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (1, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:28:59 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (2, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:28:59 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (3, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:28:59 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (4, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:28:59 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (5, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:29:00 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (6, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:29:00 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (7, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:29:00 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (8, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:29:00 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_data_types VALUES (9, -32768, -2147483648, -9223372036854775808, 1.12345, 1.1234567890, 2147483647, 9223372036854775807, '2000-05-12 12:12:12.012345', '2000-05-12', 0.2, 0.2) on instance (cluster.py:3455, query)
2025-06-08 17:29:01 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:01 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`test_data_types` order by id; on instance (cluster.py:3455, query)
2025-06-08 17:29:01 [ 549 ] DEBUG : Executing query select * from `test_database.test_data_types` order by id; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:29:01 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:29:01 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:02 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:02 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
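The inline comments in test_different_data_types' CREATE TABLE record the element type each PostgreSQL array column is expected to map to on the ClickHouse side. A quick, illustrative way to inspect what actually got created is to describe the table in both databases with the same client used throughout this log; the expected-types dict below merely restates the test's comments and is not part of the test itself.

    # Element types the test's comments expect for test_array_data_type,
    # keyed by column name (restated from the CREATE TABLE above).
    expected_element_types = {
        "a": "Date",
        "b": "DateTime64(6)",
        "c": "Float32",
        "d": "Float64",
        "e": "Decimal32",
        "f": "Int32",
        "g": "String",
        "h": "Nullable(Int32)",
        "i": "Nullable(String)",
        "k": "Nullable(String)",
    }

    def describe_array_table(instance):
        # Dump both schemas side by side for manual comparison.
        for db in ("postgres_database", "test_database"):
            print(instance.query(f"DESCRIBE TABLE `{db}`.`test_array_data_type`"))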
____________________ test_load_and_sync_all_database_tables ____________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_load_and_sync_all_database_tables(started_cluster):
        NUM_TABLES = 5
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES)
        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
        )
>       check_several_tables_are_synchronized(instance, NUM_TABLES)

test_postgresql_replica_database_engine_1/test.py:74:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:419: in check_several_tables_are_synchronized
    check_tables_are_synchronized(
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    [get_answer listing identical to the one in test_changing_replica_identity_value above]

E   helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E   Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E
E   [server stack trace identical to the one in test_changing_replica_identity_value above]
E   . (UNKNOWN_TABLE)
E   (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:29:18 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:18 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:18 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:18 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:18 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:19 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:19 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
2025-06-08 17:29:19 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:29:19 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:19 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:29:20 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:29:20 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:29:20 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:20 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:20 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
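Both test_clickhouse_restart and test_load_and_sync_all_database_tables fail inside check_several_tables_are_synchronized, which, per the traceback line helpers/postgres_utility.py:419, simply fans the per-table check out across the tables. A plausible shape is sketched below, with the table-name scheme taken from this log; it is illustrative, not the actual helper source, and relies on the per-table check sketched earlier.

    def check_several_tables_are_synchronized(instance, num_tables):
        # Fan out the per-table check across the tables created by
        # create_and_fill_postgres_tables(num_tables); the first table
        # that is out of sync (or unqueryable) raises, as seen here.
        for i in range(num_tables):
            check_tables_are_synchronized(instance, f"postgresql_replica_{i}")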
_________________ test_load_and_sync_subset_of_database_tables _________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

    def test_load_and_sync_subset_of_database_tables(started_cluster):
        NUM_TABLES = 10
        pg_manager.create_and_fill_postgres_tables(NUM_TABLES)

        publication_tables = ""
        for i in range(NUM_TABLES):
            if i < int(NUM_TABLES / 2):
                if publication_tables != "":
                    publication_tables += ", "
                publication_tables += f"postgresql_replica_{i}"

        pg_manager.create_materialized_db(
            ip=started_cluster.postgres_ip,
            port=started_cluster.postgres_port,
            settings=[
                "materialized_postgresql_tables_list = '{}'".format(publication_tables)
            ],
        )
        time.sleep(1)

        for i in range(int(NUM_TABLES / 2)):
            table_name = f"postgresql_replica_{i}"
            assert_nested_table_is_created(instance, table_name)

        result = instance.query(
            """SELECT count() FROM system.tables WHERE database = 'test_database';"""
        )
        assert int(result) == int(NUM_TABLES / 2)

        database_tables = instance.query("SHOW TABLES FROM test_database")
        for i in range(NUM_TABLES):
            table_name = "postgresql_replica_{}".format(i)
            if i < int(NUM_TABLES / 2):
                assert table_name in database_tables
            else:
                assert table_name not in database_tables
            instance.query(
                "INSERT INTO postgres_database.{} SELECT 50 + number, {} from numbers(100)".format(
                    table_name, i
                )
            )

        for i in range(NUM_TABLES):
            table_name = f"postgresql_replica_{i}"
            if i < int(NUM_TABLES / 2):
>               check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:276:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 

    [get_answer listing identical to the one in test_changing_replica_identity_value above]

E   helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18):
E   Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica_0' in scope SELECT * FROM `test_database.postgresql_replica_0` ORDER BY key ASC. Stack trace:
E
E   [server stack trace identical to the one in test_changing_replica_identity_value above]
E   . (UNKNOWN_TABLE)
E   (query: select * from `test_database.postgresql_replica_0` order by key;)

helpers/client.py:239: QueryRuntimeException
---------------------------- Captured stdout setup -----------------------------
PostgreSQL is available - running test
----------------------------- Captured stdout call -----------------------------
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_5" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_6" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_7" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_8" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_9" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))
Checking table postgresql_replica_0 exists in test_database
Checking table postgresql_replica_1 exists in test_database
Checking table postgresql_replica_2 exists in test_database
Checking table postgresql_replica_3 exists in test_database
Checking table postgresql_replica_4 exists in test_database
Checking table postgresql_replica_0 exists in test_database
Checking table is synchronized: test_database.postgresql_replica_0
------------------------------ Captured log call -------------------------------
2025-06-08 17:29:20 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_0` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:21 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_1` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:21 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_2` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:21 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_3` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:21 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_4` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:21 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_5` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:21 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_6` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:22 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_7` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:22 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_8` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:22 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`postgresql_replica_9` SELECT number, number from numbers(50) on instance (cluster.py:3455, query)
2025-06-08 17:29:22 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:22 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'postgresql_replica_0, postgresql_replica_1, postgresql_replica_2, postgresql_replica_3, postgresql_replica_4' on instance (cluster.py:3455, query)
2025-06-08 17:29:22 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query)
2025-06-08 17:29:24 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:24 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:24 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:24 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:24 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:24 [ 549 ] DEBUG : Executing query SELECT count() FROM system.tables WHERE database = 'test_database'; on instance (cluster.py:3455, query)
2025-06-08 17:29:25 [ 549 ] DEBUG : Executing query SHOW TABLES FROM test_database on instance (cluster.py:3455, query)
2025-06-08 17:29:25 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_0 SELECT 50 + number, 0 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:25 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_1 SELECT 50 + number, 1 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:26 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_2 SELECT 50 + number, 2 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:26 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_3 SELECT 50 + number, 3 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:26 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_4 SELECT 50 + number, 4 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:26 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_5 SELECT 50 + number, 5 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:27 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_6 SELECT 50 + number, 6 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:27 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_7 SELECT 50 + number, 7 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:27 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_8 SELECT 50 + number, 8 from numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:28 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica_9 SELECT 50 + number, 9 from
numbers(100) on instance (cluster.py:3455, query)
2025-06-08 17:29:28 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:28 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:29:29 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:29:29 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:29:29 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:29 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:30 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
_________________________ test_many_concurrent_queries _________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

def test_many_concurrent_queries(started_cluster):
    table = "test_many_conc"
    query_pool = [
        "DELETE FROM {} WHERE (value*value) % 3 = 0;",
        "UPDATE {} SET value = value - 125 WHERE key % 2 = 0;",
        "DELETE FROM {} WHERE key % 10 = 0;",
        "UPDATE {} SET value = value*5 WHERE key % 2 = 1;",
        "DELETE FROM {} WHERE value % 2 = 0;",
        "UPDATE {} SET value = value + 2000 WHERE key % 5 = 0;",
        "DELETE FROM {} WHERE value % 3 = 0;",
        "UPDATE {} SET value = value * 2 WHERE key % 3 = 0;",
        "DELETE FROM {} WHERE value % 9 = 2;",
        "UPDATE {} SET value = value + 2 WHERE key % 3 = 1;",
        "DELETE FROM {} WHERE value%5 = 0;",
    ]
    NUM_TABLES = 5
    conn = get_postgres_conn(
        ip=started_cluster.postgres_ip,
        port=started_cluster.postgres_port,
        database=True,
    )
    cursor = conn.cursor()
    pg_manager.create_and_fill_postgres_tables(
        NUM_TABLES, numbers=10000, table_name_base=table
    )

    def attack(thread_id):
        print("thread {}".format(thread_id))
        k = 10000
        for i in range(20):
            query_id = random.randrange(0, len(query_pool) - 1)
            table_id = random.randrange(0, 5)  # num tables
            random_table_name = f"{table}_{table_id}"
            table_name = f"{table}_{thread_id}"
            # random update / delete query
            cursor.execute(query_pool[query_id].format(random_table_name))
            print(
                "Executing for table {} query: {}".format(
                    random_table_name, query_pool[query_id]
                )
            )
            # allow some thread to do inserts (not to violate key constraints)
            if thread_id < 5:
                print("try insert table {}".format(thread_id))
                instance.query(
                    "INSERT INTO postgres_database.{} SELECT {}*10000*({} + number), number from numbers(1000)".format(
                        table_name, thread_id, k
                    )
                )
                k += 1
                print("insert table {} ok".format(thread_id))
                if i == 5:
                    # also change primary key value
                    print("try update primary key {}".format(thread_id))
                    cursor.execute(
                        "UPDATE {} SET key=key%100000+100000*{} WHERE key%{}=0".format(
                            table_name, i + 1, i + 1
                        )
                    )
                    print("update primary key {} ok".format(thread_id))

    n = [10000]
    threads = []
    threads_num = 16
    for i in range(threads_num):
        threads.append(threading.Thread(target=attack, args=(i,)))
    pg_manager.create_materialized_db(
        ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
    )
    for thread in threads:
        time.sleep(random.uniform(0, 1))
        thread.start()

    n[0] = 50000
    for table_id in range(NUM_TABLES):
        n[0] += 1
        table_name = f"{table}_{table_id}"
        instance.query(
            "INSERT INTO postgres_database.{} SELECT {} + number, number from numbers(5000)".format(
                table_name, n[0]
            )
        )
        # cursor.execute("UPDATE {table}_{} SET key=key%100000+100000*{} WHERE key%{}=0".format(table_id, table_id+1, table_id+1))

    for thread in threads:
        thread.join()

    for i in range(NUM_TABLES):
        table_name = f"{table}_{i}"
>       check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:492: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )

E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.test_many_conc_0' in scope SELECT * FROM `test_database.test_many_conc_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8.
./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f4f52745ac3 E 20. ? @ 0x00007f4f527d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.test_many_conc_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "test_many_conc_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "test_many_conc_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) thread 0 Executing for table test_many_conc_3 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; try insert table 0 thread 1 Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; try insert table 1 insert table 1 ok Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 3 = 0; try insert table 1 thread 2 Executing for table test_many_conc_3 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; try insert table 2 insert table 2 ok Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; try insert table 2 thread 3 Executing for table test_many_conc_0 query: DELETE FROM {} WHERE (value*value) % 3 = 0; try insert table 3 thread 4 insert table 3 ok Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; try insert table 4 Executing for table test_many_conc_3 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; try insert table 3 thread 5 Executing for table test_many_conc_4 query: UPDATE {} SET value = value*5 WHERE key 
% 2 = 1; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0; insert table 4 ok Executing for table test_many_conc_3 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2; try insert table 4 Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; thread 6 Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_0 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE (value*value) % 3 = 0; thread 7 thread 8 Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_4 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table 
test_many_conc_4 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE (value*value) % 3 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; thread 9 Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_3 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_0 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 3 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_0 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; thread 10 Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_0 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_3 query: UPDATE {} SET value = value*5 WHERE key % 2 = 1; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; thread 11 Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 9 = 2; Executing for 
table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_2 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE key % 10 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_1 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_1 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; thread 12 Executing for table test_many_conc_3 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_1 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2000 WHERE key % 5 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_2 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_1 query: DELETE FROM {} WHERE key % 10 = 0; thread 13 Executing for table test_many_conc_2 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; thread 14 Executing for table test_many_conc_0 query: UPDATE {} SET value = value - 125 WHERE key % 2 = 0; Executing for table test_many_conc_3 query: UPDATE {} SET value = value * 2 WHERE key % 3 = 0; Executing for table test_many_conc_4 query: DELETE FROM {} WHERE value % 9 = 2; Executing for table test_many_conc_0 query: DELETE FROM {} WHERE value % 2 = 0; Executing for table test_many_conc_4 query: UPDATE {} SET value = value + 2 WHERE key % 3 = 1; thread 15 Checking table test_many_conc_0 exists in test_database Checking table is synchronized: test_database.test_many_conc_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:29:30 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_0` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:29:30 [ 549 ] DEBUG : 
Executing query INSERT INTO `postgres_database`.`test_many_conc_1` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:29:30 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_2` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:29:31 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_3` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:29:31 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`test_many_conc_4` SELECT number, number from numbers(10000) on instance (cluster.py:3455, query) 2025-06-08 17:29:31 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:29:31 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:31 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:29:32 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_0 SELECT 0*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:33 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 1*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:33 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 1*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:33 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 2*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:34 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 2*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:34 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 3*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:35 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 4*10000*(10000 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:35 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_3 SELECT 3*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:35 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 4*10000*(10001 + number), number from numbers(1000) on instance (cluster.py:3455, query) 2025-06-08 17:29:39 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_0 SELECT 50001 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:29:40 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_1 SELECT 50002 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:29:40 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_2 SELECT 50003 + number, number from numbers(5000) on instance (cluster.py:3455, query) 2025-06-08 17:29:40 [ 549 ] DEBUG : Executing query INSERT INTO 
postgres_database.test_many_conc_3 SELECT 50004 + number, number from numbers(5000) on instance (cluster.py:3455, query)
2025-06-08 17:29:40 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.test_many_conc_4 SELECT 50005 + number, number from numbers(5000) on instance (cluster.py:3455, query)
2025-06-08 17:29:40 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query)
2025-06-08 17:29:41 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`test_many_conc_0` order by key; on instance (cluster.py:3455, query)
2025-06-08 17:29:41 [ 549 ] DEBUG : Executing query select * from `test_database.test_many_conc_0` order by key; on instance (cluster.py:3455, query)
---------------------------- Captured log teardown -----------------------------
2025-06-08 17:29:41 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query)
2025-06-08 17:29:41 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:41 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:42 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
___________________________ test_multiple_databases ____________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

def test_multiple_databases(started_cluster):
    NUM_TABLES = 5
    conn = get_postgres_conn(
        ip=started_cluster.postgres_ip,
        port=started_cluster.postgres_port,
        database=False,
    )
    pg_manager.create_postgres_db("postgres_database_1")
    pg_manager.create_postgres_db("postgres_database_2")
    conn1 = get_postgres_conn(
        ip=started_cluster.postgres_ip,
        port=started_cluster.postgres_port,
        database=True,
        database_name="postgres_database_1",
    )
    conn2 = get_postgres_conn(
        ip=started_cluster.postgres_ip,
        port=started_cluster.postgres_port,
        database=True,
        database_name="postgres_database_2",
    )
    cursor1 = conn1.cursor()
    cursor2 = conn2.cursor()
    pg_manager.create_clickhouse_postgres_db(
        "postgres_database_1",
        "",
        "postgres_database_1",
    )
    pg_manager.create_clickhouse_postgres_db(
        "postgres_database_2",
        "",
        "postgres_database_2",
    )
    cursors = [cursor1, cursor2]
    for cursor_id in range(len(cursors)):
        for i in range(NUM_TABLES):
            table_name = "postgresql_replica_{}".format(i)
            create_postgres_table(cursors[cursor_id], table_name)
            instance.query(
                "INSERT INTO postgres_database_{}.{} SELECT number, number from numbers(50)".format(
                    cursor_id + 1, table_name
                )
            )
    print(
        "database 1 tables: ",
        instance.query(
            """SELECT name FROM system.tables WHERE database = 'postgres_database_1';"""
        ),
    )
    print(
        "database 2 tables: ",
        instance.query(
            """SELECT name FROM system.tables WHERE database = 'postgres_database_2';"""
        ),
    )
    pg_manager.create_materialized_db(
        started_cluster.postgres_ip,
        started_cluster.postgres_port,
        "test_database_1",
        "postgres_database_1",
    )
    pg_manager.create_materialized_db(
        started_cluster.postgres_ip,
        started_cluster.postgres_port,
        "test_database_2",
        "postgres_database_2",
    )
    cursors = [cursor1, cursor2]
    for cursor_id in range(len(cursors)):
        for i in range(NUM_TABLES):
            table_name = "postgresql_replica_{}".format(i)
            instance.query(
                "INSERT INTO postgres_database_{}.{} SELECT 50 + number, number from numbers(50)".format(
                    cursor_id + 1, table_name
                )
            )
    for cursor_id in range(len(cursors)):
        for i in range(NUM_TABLES):
            table_name = "postgresql_replica_{}".format(i)
>           check_tables_are_synchronized(
                instance,
                table_name,
                "key",
                "postgres_database_{}".format(cursor_id + 1),
                "test_database_{}".format(cursor_id + 1),
            )

test_postgresql_replica_database_engine_1/test.py:648: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )

E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database_1.postgresql_replica_0' in scope SELECT * FROM `test_database_1.postgresql_replica_0` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8.
./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f4f52745ac3 E 20. ? @ 0x00007f4f527d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database_1.postgresql_replica_0` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_0" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_1" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_2" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_3" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Query: CREATE TABLE IF NOT EXISTS "postgresql_replica_4" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) database 1 tables: postgresql_replica_0 postgresql_replica_1 postgresql_replica_2 postgresql_replica_3 postgresql_replica_4 database 2 tables: postgresql_replica_0 postgresql_replica_1 postgresql_replica_2 postgresql_replica_3 postgresql_replica_4 Checking table postgresql_replica_0 exists in test_database_1 Checking table is synchronized: test_database_1.postgresql_replica_0 ------------------------------ Captured log call ------------------------------- 2025-06-08 17:29:42 [ 549 ] 
DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_1" on instance (cluster.py:3455, query) 2025-06-08 17:29:42 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database_1" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database_1', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:42 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_2" on instance (cluster.py:3455, query) 2025-06-08 17:29:42 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database_2" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database_2', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:43 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_0 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:43 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_1 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:43 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_2 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:43 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_3 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:43 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_4 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:44 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_0 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:44 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_1 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:44 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_2 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:44 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_3 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:45 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_4 SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:45 [ 549 ] DEBUG : Executing query SELECT name FROM system.tables WHERE database = 'postgres_database_1'; on instance (cluster.py:3455, query) 2025-06-08 17:29:45 [ 549 ] DEBUG : Executing query SELECT name FROM system.tables WHERE database = 'postgres_database_2'; on instance (cluster.py:3455, query) 2025-06-08 17:29:45 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_1` on instance (cluster.py:3455, query) 2025-06-08 17:29:45 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database_1` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database_1', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:45 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:29:46 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_2` on instance (cluster.py:3455, query) 2025-06-08 17:29:46 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database_2` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 
'postgres_database_2', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:46 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:29:47 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_0 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:47 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_1 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:47 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_2 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:47 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_3 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:47 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_1.postgresql_replica_4 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:48 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_0 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:48 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_1 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:48 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_2 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:48 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_3 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:48 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database_2.postgresql_replica_4 SELECT 50 + number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:48 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database_1` on instance (cluster.py:3455, query) 2025-06-08 17:29:48 [ 549 ] DEBUG : Executing query select * from `postgres_database_1`.`postgresql_replica_0` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:29:49 [ 549 ] DEBUG : Executing query select * from `test_database_1.postgresql_replica_0` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:29:49 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_2` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:29:49 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database_1` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:29:49 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:29:49 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_2" on instance (cluster.py:3455, query) 2025-06-08 17:29:49 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database_1" on instance (cluster.py:3455, query) 2025-06-08 17:29:50 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:29:50 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 
'mysecretpassword') on instance (cluster.py:3455, query)
________________________________ test_quoting_1 ________________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

def test_quoting_1(started_cluster):
    table_name = "user"
    pg_manager.create_and_fill_postgres_table(table_name)
    pg_manager.create_materialized_db(
        ip=started_cluster.postgres_ip, port=started_cluster.postgres_port
    )
>   check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:829: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )

E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.user' in scope SELECT * FROM `test_database.user` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8.
./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f4f52745ac3 E 20. ? @ 0x00007f4f527d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.user` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "user" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table user exists in test_database Checking table is synchronized: test_database.user ------------------------------ Captured log call ------------------------------- 2025-06-08 17:29:50 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`user` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:50 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:29:50 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:51 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:29:51 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:29:51 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`user` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:29:51 [ 549 ] DEBUG : Executing query select * from `test_database.user` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:29:51 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:29:51 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 
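test_quoting_1 fails on exactly the same query shape even though the table was created and synchronized, which points at the check helper rather than replication. The dotted form used to resolve because the old analyzer split `db.table` into database and table as a compatibility behavior; the new analyzer in 24.3 treats it as one opaque name. An illustrative way to confirm this from inside one of these tests (not part of the suite; `instance` is the node fixture the tests already use, and `allow_experimental_analyzer` is the setting that toggles the analyzer in 24.3):

q = "select count() from `test_database.user`"

# Old analyzer: the dotted identifier is split into database + table,
# so the query resolves.
instance.query(q, settings={"allow_experimental_analyzer": 0})

# New analyzer: a single identifier named "test_database.user" is looked
# up in the current database -> UNKNOWN_TABLE, matching the traces here.
instance.query(q, settings={"allow_experimental_analyzer": 1})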
2025-06-08 17:29:52 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query)
2025-06-08 17:29:52 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query)
________________________________ test_quoting_2 ________________________________
[gw1] linux -- Python 3.10.12 /usr/bin/python3

started_cluster = 

def test_quoting_2(started_cluster):
    table_name = "user"
    pg_manager.create_and_fill_postgres_table(table_name)
    pg_manager.create_materialized_db(
        ip=started_cluster.postgres_ip,
        port=started_cluster.postgres_port,
        settings=[f"materialized_postgresql_tables_list = '{table_name}'"],
    )
>   check_tables_are_synchronized(instance, table_name)

test_postgresql_replica_database_engine_1/test.py:840: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
helpers/postgres_utility.py:392: in check_tables_are_synchronized
    result = instance.query(result_query)
helpers/cluster.py:3456: in query
    return self.client.query(
helpers/client.py:36: in wrap
    return func(self, *args, **kwargs)
helpers/client.py:74: in query
    ).get_answer()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 

    def get_answer(self):
        self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT)
        self.stdout_file.seek(0)
        self.stderr_file.seek(0)
        stdout = self.stdout_file.read().decode("utf-8", errors="replace")
        stderr = self.stderr_file.read().decode("utf-8", errors="replace")
        if (
            self.timer is not None
            and not self.process_finished_before_timeout
            and not self.ignore_error
        ):
            logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}")
            raise QueryTimeoutExceedException("Client timed out!")
        if (
            self.process.returncode != 0 or self.remove_trash_from_stderr(stderr)
        ) and not self.ignore_error:
>           raise QueryRuntimeException(
                "Client failed! Return code: {}, stderr: {}".format(
                    self.process.returncode, stderr
                ),
                self.process.returncode,
                stderr,
            )

E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.user' in scope SELECT * FROM `test_database.user` ORDER BY key ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6.
./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f4f52745ac3 E 20. ? @ 0x00007f4f527d7850 E . 
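test_quoting_1 and test_quoting_2 name the table `user` deliberately: it is a reserved word in PostgreSQL, so it must be double-quoted there and backtick-quoted in ClickHouse. The reserved name is a red herring for this failure, though; separately quoted identifiers resolve fine, and only the single dotted identifier built by the check helper fails, as the error text that follows shows. A small contrast, with literals matching what the log records:

# PostgreSQL side: reserved word, double quotes required.
pg_ddl = 'CREATE TABLE IF NOT EXISTS "user" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key))'

# ClickHouse side: backticks per part resolve...
ch_ok = "select * from `test_database`.`user` order by key;"

# ...while one dotted identifier does not (UNKNOWN_TABLE below).
ch_broken = "select * from `test_database.user` order by key;"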
(UNKNOWN_TABLE) E (query: select * from `test_database.user` order by key;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "user" ( key Integer NOT NULL, value Integer, PRIMARY KEY(key)) Checking table user exists in test_database Checking table is synchronized: test_database.user ------------------------------ Captured log call ------------------------------- 2025-06-08 17:29:52 [ 549 ] DEBUG : Executing query INSERT INTO `postgres_database`.`user` SELECT number, number from numbers(50) on instance (cluster.py:3455, query) 2025-06-08 17:29:52 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:29:52 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') SETTINGS materialized_postgresql_tables_list = 'user' on instance (cluster.py:3455, query) 2025-06-08 17:29:53 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:29:53 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:29:53 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`user` order by key; on instance (cluster.py:3455, query) 2025-06-08 17:29:53 [ 549 ] DEBUG : Executing query select * from `test_database.user` order by key; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:29:53 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:29:53 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:29:54 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:29:54 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) _________________________ test_replica_identity_index __________________________ [gw1] linux -- Python 3.10.12 /usr/bin/python3 started_cluster = def test_replica_identity_index(started_cluster): pg_manager.create_postgres_table( "postgresql_replica", template=postgres_table_template_3 ) pg_manager.execute("CREATE unique INDEX idx on postgresql_replica(key1, key2);") pg_manager.execute( "ALTER TABLE postgresql_replica REPLICA IDENTITY USING INDEX idx" ) instance.query( "INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(50, 10)" ) pg_manager.create_materialized_db( ip=started_cluster.postgres_ip, port=started_cluster.postgres_port ) instance.query( "INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(100, 10)" ) > check_tables_are_synchronized(instance, "postgresql_replica", order_by="key1") test_postgresql_replica_database_engine_1/test.py:334: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ helpers/postgres_utility.py:392: in check_tables_are_synchronized result = instance.query(result_query) helpers/cluster.py:3456: in query 
return self.client.query( helpers/client.py:36: in wrap return func(self, *args, **kwargs) helpers/client.py:74: in query ).get_answer() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def get_answer(self): self.process.wait(timeout=DEFAULT_QUERY_TIMEOUT) self.stdout_file.seek(0) self.stderr_file.seek(0) stdout = self.stdout_file.read().decode("utf-8", errors="replace") stderr = self.stderr_file.read().decode("utf-8", errors="replace") if ( self.timer is not None and not self.process_finished_before_timeout and not self.ignore_error ): logging.debug(f"Timed out. Last stdout:{stdout}, stderr:{stderr}") raise QueryTimeoutExceedException("Client timed out!") if ( self.process.returncode != 0 or self.remove_trash_from_stderr(stderr) ) and not self.ignore_error: > raise QueryRuntimeException( "Client failed! Return code: {}, stderr: {}".format( self.process.returncode, stderr ), self.process.returncode, stderr, ) E helpers.client.QueryRuntimeException: Client failed! Return code: 60, stderr: Received exception from server (version 24.3.18): E Code: 60. DB::Exception: Received from 172.16.3.3:9000. DB::Exception: Unknown table expression identifier 'test_database.postgresql_replica' in scope SELECT * FROM `test_database.postgresql_replica` ORDER BY key1 ASC. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/exception:141: Poco::Exception::Exception(String const&, int) @ 0x000000003a554b73 E 1. ./build_docker/./src/Common/Exception.cpp:96: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001cb49070 E 2. DB::Exception::Exception(int, FormatStringHelperImpl::type, std::type_identity::type>, String const&, String&&) @ 0x000000000c5c35b6 E 3. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:6971: DB::(anonymous namespace)::QueryAnalyzer::resolveQuery(std::shared_ptr const&, DB::(anonymous namespace)::IdentifierResolveScope&) @ 0x000000002dbba909 E 4. ./build_docker/./src/Analyzer/Passes/QueryAnalysisPass.cpp:0: DB::QueryAnalysisPass::run(std::shared_ptr&, std::shared_ptr) @ 0x000000002dbabc57 E 5. ./build_docker/./src/Analyzer/QueryTreePassManager.cpp:0: DB::QueryTreePassManager::run(std::shared_ptr) @ 0x000000002dba2d0e E 6. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:0: DB::(anonymous namespace)::buildQueryTreeAndRunPasses(std::shared_ptr const&, DB::SelectQueryOptions const&, std::shared_ptr const&, std::shared_ptr const&) @ 0x000000002e5ea236 E 7. ./build_docker/./src/Interpreters/InterpreterSelectQueryAnalyzer.cpp:160: DB::InterpreterSelectQueryAnalyzer::InterpreterSelectQueryAnalyzer(std::shared_ptr const&, std::shared_ptr const&, DB::SelectQueryOptions const&, std::vector> const&) @ 0x000000002e5e2d54 E 8. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:0: std::__unique_if::__unique_single std::make_unique[abi:v15000]&, std::shared_ptr const&, DB::SelectQueryOptions const&>(std::shared_ptr&, std::shared_ptr const&, DB::SelectQueryOptions const&) @ 0x000000002e5ede81 E 9. ./contrib/llvm-project/libcxx/include/__functional/function.h:0: ? @ 0x000000002e49edbd E 10. ./build_docker/./src/Interpreters/executeQuery.cpp:0: DB::executeQueryImpl(char const*, char const*, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x000000002f22694a E 11. ./build_docker/./src/Interpreters/executeQuery.cpp:1376: DB::executeQuery(String const&, std::shared_ptr, DB::QueryFlags, DB::QueryProcessingStage::Enum) @ 0x000000002f21d9ca E 12. 
./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:612: DB::TCPHandler::runImpl() @ 0x000000003272de3d E 13. ./build_docker/./src/Server/TCPHandler.cpp:2341: DB::TCPHandler::run() @ 0x0000000032772891 E 14. ./build_docker/./base/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x000000003a25ceaf E 15. ./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48: Poco::Net::TCPServerDispatcher::run() @ 0x000000003a25db57 E 16. ./build_docker/./base/poco/Foundation/src/ThreadPool.cpp:0: Poco::PooledThread::run() @ 0x000000003a66ab3c E 17. ./base/poco/Foundation/include/Poco/SharedPtr.h:231: Poco::ThreadImpl::runnableEntry(void*) @ 0x000000003a663b28 E 18. asan_thread_start(void*) @ 0x000000000a7b9edb E 19. ? @ 0x00007f4f52745ac3 E 20. ? @ 0x00007f4f527d7850 E . (UNKNOWN_TABLE) E (query: select * from `test_database.postgresql_replica` order by key1;) helpers/client.py:239: QueryRuntimeException ---------------------------- Captured stdout setup ----------------------------- PostgreSQL is available - running test ----------------------------- Captured stdout call ----------------------------- Query: CREATE TABLE IF NOT EXISTS "postgresql_replica" ( key1 Integer NOT NULL, value1 Integer, key2 Integer NOT NULL, value2 Integer NOT NULL) Checking table postgresql_replica exists in test_database Checking table is synchronized: test_database.postgresql_replica ------------------------------ Captured log call ------------------------------- 2025-06-08 17:29:54 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(50, 10) on instance (cluster.py:3455, query) 2025-06-08 17:29:54 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:29:54 [ 549 ] DEBUG : Executing query CREATE DATABASE `test_database` ENGINE = MaterializedPostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:54 [ 549 ] DEBUG : Executing query SHOW DATABASES on instance (cluster.py:3455, query) 2025-06-08 17:29:55 [ 549 ] DEBUG : Executing query INSERT INTO postgres_database.postgresql_replica SELECT number, number, number, number from numbers(100, 10) on instance (cluster.py:3455, query) 2025-06-08 17:29:55 [ 549 ] DEBUG : Executing query SHOW TABLES FROM `test_database` on instance (cluster.py:3455, query) 2025-06-08 17:29:55 [ 549 ] DEBUG : Executing query select * from `postgres_database`.`postgresql_replica` order by key1; on instance (cluster.py:3455, query) 2025-06-08 17:29:55 [ 549 ] DEBUG : Executing query select * from `test_database.postgresql_replica` order by key1; on instance (cluster.py:3455, query) ---------------------------- Captured log teardown ----------------------------- 2025-06-08 17:29:55 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS `test_database` SYNC on instance (cluster.py:3455, query) 2025-06-08 17:29:56 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:29:56 [ 549 ] DEBUG : Executing query DROP DATABASE IF EXISTS "postgres_database" on instance (cluster.py:3455, query) 2025-06-08 17:29:56 [ 549 ] DEBUG : Executing query CREATE DATABASE "postgres_database" ENGINE = PostgreSQL('172.16.3.2:5432', 'postgres_database', 'postgres', 'mysecretpassword') on instance (cluster.py:3455, query) 2025-06-08 17:29:56 [ 549 ] DEBUG : Command:['docker-compose', 
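Both failures above show the same UNKNOWN_TABLE pattern: the query against the source database quotes database and table separately (`postgres_database`.`user`) and succeeds, while the query against the materialized database quotes the whole qualified name as one identifier (`test_database.user`), which ClickHouse resolves as a single table name in the current database. A minimal sketch of the difference (hypothetical function names, not the actual helpers/postgres_utility.py code):

# Sketch of the identifier-quoting pitfall behind the UNKNOWN_TABLE (code 60) errors.
# Function names are hypothetical; only the quoting rule is the point.

def qualified_name_broken(database: str, table: str) -> str:
    # Backticks quote exactly one identifier, so the dot becomes part of the
    # name: the server looks for a table literally called "test_database.user".
    return f"`{database}.{table}`"

def qualified_name_ok(database: str, table: str) -> str:
    # Quote database and table separately; the dot stays a separator.
    return f"`{database}`.`{table}`"

assert qualified_name_broken("test_database", "user") == "`test_database.user`"
assert qualified_name_ok("test_database", "user") == "`test_database`.`user`"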
2025-06-08 17:29:56 [ 549 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'stop', '--timeout', '20'] (cluster.py:113, run_and_check)
2025-06-08 17:29:57 [ 549 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:29:57 [ 549 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:29:57 [ 549 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:29:57 [ 549 ] DEBUG : Stderr:Stopping roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:29:57 [ 549 ] DEBUG : Command:['bash', '-c', '[ -f /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/logs/stderr.log* || true'] (cluster.py:113, run_and_check)
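The bash one-liner above is the teardown's crash check: sanitizer reports (ASan and friends) are framed by runs of '=' characters, so grepping stderr.log* for "==================" reveals whether the server hit a sanitizer error during the test. Roughly equivalent logic in Python (a sketch; has_sanitizer_report is a hypothetical name):

# Sketch of the teardown's sanitizer-report scan; hypothetical helper name.
# zgrep also reads rotated .gz logs, mirrored here with gzip.
import glob
import gzip

def has_sanitizer_report(pattern: str) -> bool:
    for path in glob.glob(pattern):
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt", errors="replace") as f:
            # "==================" frames ASan/TSan report blocks.
            if any("==================" in line for line in f):
                return True
    return False

print(has_sanitizer_report("/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/logs/stderr.log*"))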
2025-06-08 17:29:57 [ 549 ] DEBUG : Command:['docker-compose', '--env-file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/.env', '--project-name', 'roottestpostgresqlreplicadatabaseengine1', '--file', '/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/_instances_0/instance/docker-compose.yml', '--file', '/ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_postgres.yml', 'down', '--volumes'] (cluster.py:113, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_instance_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... (cluster.py:123, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_instance_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Stderr:Removing roottestpostgresqlreplicadatabaseengine1_postgres1_1 ... done (cluster.py:123, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Stderr:Removing network roottestpostgresqlreplicadatabaseengine1_default (cluster.py:123, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Cleanup called (cluster.py:801, cleanup)
2025-06-08 17:29:58 [ 549 ] DEBUG : Docker networks for project roottestpostgresqlreplicadatabaseengine1 are NETWORK ID NAME DRIVER SCOPE (cluster.py:780, print_all_docker_pieces)
2025-06-08 17:29:58 [ 549 ] DEBUG : Docker containers for project roottestpostgresqlreplicadatabaseengine1 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:788, print_all_docker_pieces)
2025-06-08 17:29:58 [ 549 ] DEBUG : Docker volumes for project roottestpostgresqlreplicadatabaseengine1 are DRIVER VOLUME NAME (cluster.py:796, print_all_docker_pieces)
2025-06-08 17:29:58 [ 549 ] DEBUG : Command:docker container list --all --filter name='^/roottestpostgresqlreplicadatabaseengine1_.*_1$' --format '{{.ID}}:{{.Names}}' (cluster.py:113, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Unstopped containers: {} (cluster.py:815, cleanup)
2025-06-08 17:29:58 [ 549 ] DEBUG : No running containers for project: roottestpostgresqlreplicadatabaseengine1 (cluster.py:829, cleanup)
2025-06-08 17:29:58 [ 549 ] DEBUG : Trying to prune unused networks... (cluster.py:835, cleanup)
2025-06-08 17:29:58 [ 549 ] DEBUG : Trying to prune unused images... (cluster.py:851, cleanup)
2025-06-08 17:29:58 [ 549 ] DEBUG : Command:['docker', 'image', 'prune', '-f'] (cluster.py:113, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:121, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Images pruned (cluster.py:854, cleanup)
2025-06-08 17:29:58 [ 549 ] DEBUG : Trying to prune unused volumes... (cluster.py:860, cleanup)
2025-06-08 17:29:58 [ 549 ] DEBUG : Command:['docker volume ls | wc -l'] (cluster.py:113, run_and_check)
2025-06-08 17:29:58 [ 549 ] DEBUG : Stdout:3 (cluster.py:121, run_and_check)
=============================== warnings summary ===============================
test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
  /usr/local/lib/python3.10/dist-packages/_pytest/threadexception.py:73: PytestUnhandledThreadExceptionWarning: Exception in thread Thread-37 (attack)
  
  Traceback (most recent call last):
    File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
      self.run()
    File "/usr/lib/python3.10/threading.py", line 953, in run
      self._target(*self._args, **self._kwargs)
    File "/ClickHouse/tests/integration/test_postgresql_replica_database_engine_1/test.py", line 433, in attack
      cursor.execute(query_pool[query_id].format(random_table_name))
  psycopg2.errors.NumericValueOutOfRange: integer out of range
  
    warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
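The single warning is the unhandled exception from the attack thread in test_many_concurrent_queries: PostgreSQL integer columns are signed 32-bit, so once a statement from the query pool pushes a value past 2147483647, cursor.execute fails with NumericValueOutOfRange inside the thread, where pytest can only surface it as a warning. A minimal reproduction sketch (standalone; the connection parameters are placeholders, not the test's fixtures):

# Minimal reproduction of psycopg2.errors.NumericValueOutOfRange,
# assuming a reachable PostgreSQL server; credentials are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5432, user="postgres", password="mysecretpassword")
conn.autocommit = True
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS overflow_demo (key integer)")
try:
    # PostgreSQL `integer` is a signed 32-bit type, so 2**31 is out of range.
    cur.execute("INSERT INTO overflow_demo VALUES (%s)", (2**31,))
except psycopg2.errors.NumericValueOutOfRange as e:
    print(e)  # integer out of range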
============================== slowest durations ===============================
234.05s call test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]
99.63s call test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]
61.53s call test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]
45.06s setup test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication
32.20s call test_jbod_ha/test.py::test_jbod_ha
32.17s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]
29.25s setup test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
28.13s setup test_merge_tree_s3/test.py::test_alter_table_columns[node]
23.01s teardown test_postgresql_protocol/test.py::test_python_client
22.38s teardown test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]
22.36s teardown test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader
22.09s teardown test_merge_tree_s3/test.py::test_table_manipulations[node]
21.77s teardown test_jdbc_bridge/test.py::test_jdbc_update
21.47s setup test_merge_tree_load_parts/test.py::test_merge_tree_load_parts
20.63s call test_interserver_dns_retires/test.py::test_query
20.12s setup test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]
20.07s call test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]
19.80s setup test_jbod_ha/test.py::test_jbod_ha
19.79s setup test_keeper_availability_zone/test.py::test_get_availability_zone
19.13s setup test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader
18.99s setup test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
18.47s call test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted
18.39s call test_merge_tree_load_parts/test.py::test_merge_tree_load_parts
17.94s setup test_jdbc_bridge/test.py::test_jdbc_delete
17.79s setup test_keeper_and_access_storage/test.py::test_create_replicated
17.30s setup test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
17.10s setup test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]
16.46s setup test_placement_info/test.py::test_placement_info_from_config
16.21s setup test_partition/test.py::test_attach_check_all_parts
16.07s setup test_keeper_secure_client/test.py::test_connection
15.75s setup test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3
15.67s setup test_keeper_memory_soft_limit/test.py::test_soft_limit_create
15.02s call test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
14.71s call test_keeper_persistent_log_multinode/test.py::test_restart_multinode
13.89s teardown test_partition/test.py::test_system_detached_parts
13.81s setup test_postgresql_protocol/test.py::test_java_client
13.56s setup test_log_levels_update/test.py::test_log_levels_update
13.55s teardown test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3
11.67s call test_partition/test.py::test_system_detached_parts
11.42s call test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader
11.17s call test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
10.96s teardown test_keeper_memory_soft_limit/test.py::test_soft_limit_create
10.19s setup test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local
10.19s call test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
10.01s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]
9.99s call test_jdbc_bridge/test.py::test_jdbc_delete
9.92s call test_merge_tree_s3/test.py::test_alter_table_columns[node]
9.49s teardown test_insert_into_distributed/test.py::test_table_function
9.48s setup test_keeper_persistent_log_multinode/test.py::test_restart_multinode
8.36s call test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
7.62s setup test_library_bridge/test_exiled.py::test_bridge_dies_with_parent
7.58s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]
7.08s call test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
6.87s call test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
6.87s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]
6.71s call test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]
6.54s teardown test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
6.43s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]
6.06s teardown test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table
5.95s call test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]
5.58s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]
5.06s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]
4.99s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]
4.96s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]
4.94s teardown test_keeper_secure_client/test.py::test_connection
4.91s call test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]
4.71s call test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
4.59s call test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]
4.55s call test_partition/test.py::test_make_clone_in_detached
4.54s teardown test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error
4.40s teardown test_placement_info/test.py::test_placement_info_missing_data
4.37s call test_placement_info/test.py::test_placement_info_from_imds
4.32s call test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]
4.29s teardown test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local
4.19s call test_insert_into_distributed/test.py::test_prefer_localhost_replica
4.15s call test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]
4.07s call test_merge_tree_s3/test.py::test_attach_detach_partition[node]
3.91s call test_merge_tree_s3/test.py::test_table_manipulations[node]
3.82s call test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]
3.80s call test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
3.75s teardown test_log_levels_update/test.py::test_log_levels_update
3.67s teardown test_jbod_ha/test.py::test_jbod_ha
3.54s call test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]
3.39s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
3.36s call test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]
3.33s call test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]
3.31s call test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]
3.10s call test_placement_info/test.py::test_placement_info_from_file
3.08s teardown test_keeper_availability_zone/test.py::test_get_availability_zone
2.98s teardown test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
2.97s call test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]
2.97s call test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]
2.86s call test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
2.76s call test_postgresql_replica_database_engine_1/test.py::test_different_data_types
2.64s call test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
2.56s teardown test_keeper_persistent_log_multinode/test.py::test_restart_multinode
2.42s call test_merge_tree_s3/test.py::test_freeze_unfreeze[node]
2.36s call test_log_levels_update/test.py::test_log_levels_update
2.32s call test_partition/test.py::test_attach_check_all_parts
2.28s call test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]
2.26s call test_partition/test.py::test_detached_part_dir_exists
2.26s call test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3
2.26s call test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
2.25s call test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]
2.24s call test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]
2.20s call test_partition/test.py::test_partition_complex
2.16s call test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]
2.16s call test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
2.15s call test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]
2.10s teardown test_library_bridge/test_exiled.py::test_bridge_dies_with_parent
2.09s teardown test_keeper_and_access_storage/test.py::test_create_replicated
2.00s call test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]
2.00s call test_partition/test.py::test_drop_detached_parts
1.90s call test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
1.69s call test_insert_over_http_query_log/test.py::test_insert_over_http_ok
1.44s setup test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]
1.31s teardown test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
1.29s call test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
1.28s call test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
1.11s call test_postgresql_replica_database_engine_1/test.py::test_quoting_1
1.01s call test_postgresql_replica_database_engine_1/test.py::test_quoting_2
0.95s teardown test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
0.91s teardown test_postgresql_replica_database_engine_1/test.py::test_different_data_types
0.88s setup test_partition/test.py::test_drop_detached_parts
0.87s teardown test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
0.86s setup test_partition/test.py::test_partition_simple
0.86s teardown test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
0.85s call test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table
0.84s call test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local
0.83s setup test_partition/test.py::test_system_detached_parts
0.81s teardown test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
0.81s teardown test_postgresql_replica_database_engine_1/test.py::test_quoting_1
0.80s call test_jdbc_bridge/test.py::test_jdbc_insert
0.76s teardown test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
0.74s teardown test_postgresql_replica_database_engine_1/test.py::test_quoting_2
0.69s call test_placement_info/test.py::test_placement_info_missing_data
0.69s call test_postgresql_protocol/test.py::test_java_client
0.68s call test_placement_info/test.py::test_placement_info_from_config
0.68s call test_jdbc_bridge/test.py::test_jdbc_update
0.66s setup test_partition/test.py::test_partition_complex
0.63s call test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement
0.61s teardown test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
0.59s call test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
0.57s teardown test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
0.56s call test_postgresql_protocol/test.py::test_psql_client
0.56s teardown test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
0.55s call test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication
0.55s setup test_partition/test.py::test_cannot_attach_active_part
0.55s call test_partition/test.py::test_cannot_attach_active_part
0.53s call test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
0.51s teardown test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
0.50s call test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]
0.50s call test_partition/test.py::test_partition_simple
0.44s call test_jdbc_bridge/test.py::test_jdbc_table_engine
0.38s call test_insert_into_distributed/test.py::test_table_function
0.36s call test_keeper_memory_soft_limit/test.py::test_soft_limit_create
0.33s teardown test_partition/test.py::test_drop_detached_parts
0.33s call test_library_bridge/test_exiled.py::test_bridge_dies_with_parent
0.32s call test_jdbc_bridge/test.py::test_jdbc_distributed_query
0.28s teardown test_partition/test.py::test_attach_check_all_parts
0.27s call test_keeper_availability_zone/test.py::test_get_availability_zone
0.27s call test_jdbc_bridge/test.py::test_jdbc_query
0.22s call test_keeper_secure_client/test.py::test_connection
0.17s call test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error
0.17s call test_keeper_and_access_storage/test.py::test_create_replicated
0.17s teardown test_partition/test.py::test_partition_simple
0.17s teardown test_partition/test.py::test_partition_complex
0.12s teardown test_partition/test.py::test_cannot_attach_active_part
0.10s setup test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]
0.10s setup test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]
0.09s setup test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]
0.09s setup test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]
0.07s call test_postgresql_protocol/test.py::test_python_client
0.07s setup test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]
0.06s setup test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]
0.00s setup test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]
0.00s setup test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]
0.00s setup test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]
0.00s setup test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]
0.00s setup test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]
0.00s setup test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]
0.00s setup test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]
0.00s setup test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]
0.00s setup test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]
0.00s setup test_merge_tree_s3/test.py::test_table_manipulations[node]
0.00s setup test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]
0.00s setup test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]
0.00s setup test_merge_tree_s3/test.py::test_freeze_unfreeze[node]
0.00s setup test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]
0.00s setup test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]
0.00s setup test_merge_tree_s3/test.py::test_attach_detach_partition[node]
0.00s teardown test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]
0.00s teardown test_interserver_dns_retires/test.py::test_query
0.00s call test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]
0.00s setup test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error
0.00s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
0.00s setup test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
0.00s setup test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]
0.00s teardown test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted
0.00s teardown test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]
0.00s setup test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
0.00s teardown test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_quoting_2
0.00s setup test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
0.00s teardown test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
0.00s setup test_placement_info/test.py::test_placement_info_from_file
0.00s teardown test_placement_info/test.py::test_placement_info_from_config
0.00s setup test_insert_into_distributed/test.py::test_prefer_localhost_replica
0.00s setup test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table
0.00s teardown test_merge_tree_load_parts/test.py::test_merge_tree_load_parts
0.00s teardown test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]
0.00s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]
0.00s teardown test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]
0.00s teardown test_postgresql_protocol/test.py::test_java_client
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
0.00s setup test_interserver_dns_retires/test.py::test_query
0.00s teardown test_placement_info/test.py::test_placement_info_from_imds
0.00s setup test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]
0.00s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]
0.00s teardown test_merge_tree_s3/test.py::test_freeze_unfreeze[node]
0.00s setup test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted
0.00s teardown test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]
0.00s setup test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]
0.00s setup test_placement_info/test.py::test_placement_info_missing_data
0.00s setup test_insert_over_http_query_log/test.py::test_insert_over_http_ok
0.00s setup test_partition/test.py::test_detached_part_dir_exists
0.00s setup test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
0.00s setup test_partition/test.py::test_make_clone_in_detached
0.00s teardown test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]
0.00s setup test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement
0.00s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]
0.00s teardown test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]
0.00s teardown test_merge_tree_s3/test.py::test_alter_table_columns[node]
0.00s call test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]
0.00s teardown test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_different_data_types
0.00s teardown test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]
0.00s teardown test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]
0.00s teardown test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]
0.00s teardown test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
0.00s teardown test_insert_over_http_query_log/test.py::test_insert_over_http_ok
0.00s teardown test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]
0.00s teardown test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]
0.00s teardown test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]
0.00s call test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
0.00s teardown test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]
0.00s teardown test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]
0.00s teardown test_merge_tree_s3/test.py::test_attach_detach_partition[node]
0.00s teardown test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
0.00s setup test_postgresql_protocol/test.py::test_psql_client
0.00s teardown test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]
0.00s teardown test_partition/test.py::test_make_clone_in_detached
0.00s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
0.00s teardown test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]
0.00s teardown test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]
0.00s teardown test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement
0.00s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]
0.00s setup test_jdbc_bridge/test.py::test_jdbc_table_engine
0.00s teardown test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
0.00s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]
0.00s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]
0.00s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
0.00s teardown test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]
0.00s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]
0.00s setup test_postgresql_replica_database_engine_1/test.py::test_quoting_1
0.00s setup test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]
0.00s setup test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
0.00s teardown test_partition/test.py::test_detached_part_dir_exists
0.00s teardown test_jdbc_bridge/test.py::test_jdbc_query
0.00s teardown test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]
0.00s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]
0.00s setup test_jdbc_bridge/test.py::test_jdbc_update
0.00s setup test_jdbc_bridge/test.py::test_jdbc_query
0.00s teardown test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
0.00s teardown test_postgresql_protocol/test.py::test_psql_client
0.00s setup test_postgresql_protocol/test.py::test_python_client
0.00s setup test_jdbc_bridge/test.py::test_jdbc_insert
0.00s teardown test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]
0.00s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]
0.00s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]
0.00s setup test_insert_into_distributed/test.py::test_table_function
0.00s setup test_placement_info/test.py::test_placement_info_from_imds
0.00s setup test_jdbc_bridge/test.py::test_jdbc_distributed_query
0.00s teardown test_jdbc_bridge/test.py::test_jdbc_delete
0.00s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]
0.00s teardown test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]
0.00s teardown test_jdbc_bridge/test.py::test_jdbc_distributed_query
0.00s teardown test_placement_info/test.py::test_placement_info_from_file
0.00s teardown test_insert_into_distributed/test.py::test_prefer_localhost_replica
0.00s teardown test_jdbc_bridge/test.py::test_jdbc_insert
0.00s teardown test_jdbc_bridge/test.py::test_jdbc_table_engine
=========================== short test summary info ============================
FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication
FAILED test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication
FAILED test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value
FAILED test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart
FAILED test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions
FAILED test_postgresql_replica_database_engine_1/test.py::test_different_data_types
FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables
FAILED test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables
FAILED test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries
FAILED test_postgresql_replica_database_engine_1/test.py::test_multiple_databases
FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_1 - he...
FAILED test_postgresql_replica_database_engine_1/test.py::test_quoting_2 - he...
FAILED test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index
PASSED test_placement_info/test.py::test_placement_info_from_config
PASSED test_partition/test.py::test_attach_check_all_parts
PASSED test_partition/test.py::test_cannot_attach_active_part
PASSED test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]
PASSED test_placement_info/test.py::test_placement_info_from_file
PASSED test_partition/test.py::test_detached_part_dir_exists
PASSED test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]
PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]
PASSED test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]
PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]
PASSED test_placement_info/test.py::test_placement_info_from_imds
PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement
PASSED test_partition/test.py::test_drop_detached_parts
PASSED test_placement_info/test.py::test_placement_info_missing_data
PASSED test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]
PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_ok
PASSED test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table
PASSED test_jdbc_bridge/test.py::test_jdbc_delete
PASSED test_jdbc_bridge/test.py::test_jdbc_distributed_query
PASSED test_jdbc_bridge/test.py::test_jdbc_insert
PASSED test_jdbc_bridge/test.py::test_jdbc_query
PASSED test_jdbc_bridge/test.py::test_jdbc_table_engine
PASSED test_partition/test.py::test_make_clone_in_detached
PASSED test_jdbc_bridge/test.py::test_jdbc_update
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams
PASSED test_partition/test.py::test_partition_complex
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where
PASSED test_partition/test.py::test_partition_simple
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where
PASSED test_merge_tree_s3/test.py::test_alter_table_columns[node]
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]
PASSED test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree
PASSED test_merge_tree_s3/test.py::test_attach_detach_partition[node]
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]
PASSED test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication
PASSED test_partition/test.py::test_system_detached_parts
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]
PASSED test_insert_into_distributed/test.py::test_prefer_localhost_replica
PASSED test_insert_into_distributed/test.py::test_table_function
PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node
PASSED test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]
PASSED test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local
PASSED test_postgresql_protocol/test.py::test_java_client
PASSED test_postgresql_protocol/test.py::test_psql_client
PASSED test_postgresql_protocol/test.py::test_python_client
PASSED test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished
PASSED test_keeper_availability_zone/test.py::test_get_availability_zone
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]
PASSED test_merge_tree_load_parts/test.py::test_merge_tree_load_parts
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]
PASSED test_keeper_and_access_storage/test.py::test_create_replicated
PASSED test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]
PASSED test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted
PASSED test_interserver_dns_retires/test.py::test_query
PASSED test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader
PASSED test_keeper_persistent_log_multinode/test.py::test_restart_multinode
PASSED test_log_levels_update/test.py::test_log_levels_update
PASSED test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]
PASSED test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]
PASSED test_merge_tree_s3/test.py::test_freeze_unfreeze[node]
PASSED test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3
PASSED test_jbod_ha/test.py::test_jbod_ha
PASSED test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]
PASSED test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]
PASSED test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]
PASSED test_keeper_secure_client/test.py::test_connection
PASSED test_keeper_memory_soft_limit/test.py::test_soft_limit_create
PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]
PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]
PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]
PASSED test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]
PASSED test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]
PASSED test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]
PASSED test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]
PASSED test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]
PASSED test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]
PASSED test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]
PASSED test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]
PASSED test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]
PASSED test_merge_tree_s3/test.py::test_table_manipulations[node]
SKIPPED [1] test_merge_tree_load_parts/test.py:227: Skip with debug build and sanitizers. This test intentionally triggers LOGICAL_ERROR which leads to crash with those builds
SKIPPED [1] test_library_bridge/test_exiled.py:53: Leak sanitizer falsely reports about a leak of 16 bytes in clickhouse-odbc-bridge
SKIPPED [3] test_merge_tree_s3/test.py:931: Disabled, will be fixed after https://github.com/ClickHouse/ClickHouse/issues/51152
======= 13 failed, 82 passed, 5 skipped, 1 warning in 537.83s (0:08:57) ========
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 437, in <module>
    subprocess.check_call(cmd, shell=True)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_y0o5jx --privileged --dns-search='.' --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-odbc-bridge:/clickhouse-odbc-bridge --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/_temp/test/build/clickhouse-library-bridge:/clickhouse-library-bridge --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=2cffe1eae894 -e DOCKER_BASE_TAG=2993bc2bf171 -e DOCKER_KERBERIZED_HADOOP_TAG=ce74919e88f5 -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=a2d3dc777d0c -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e CLICKHOUSE_USE_OLD_ANALYZER=1 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 test_insert_into_distributed/test.py::test_inserts_single_replica_no_internal_replication test_insert_into_distributed/test.py::test_prefer_localhost_replica test_insert_into_distributed/test.py::test_table_function test_insert_into_distributed_through_materialized_view/test.py::test_inserts_local 'test_insert_over_http_query_log/test.py::test_insert_over_http_exception[0]' 'test_insert_over_http_query_log/test.py::test_insert_over_http_exception[1]' test_insert_over_http_query_log/test.py::test_insert_over_http_invalid_statement test_insert_over_http_query_log/test.py::test_insert_over_http_ok test_insert_over_http_query_log/test.py::test_insert_over_http_unknown_table test_interserver_dns_retires/test.py::test_query test_jbod_ha/test.py::test_jbod_ha test_jdbc_bridge/test.py::test_jdbc_delete test_jdbc_bridge/test.py::test_jdbc_distributed_query test_jdbc_bridge/test.py::test_jdbc_insert test_jdbc_bridge/test.py::test_jdbc_query test_jdbc_bridge/test.py::test_jdbc_table_engine test_jdbc_bridge/test.py::test_jdbc_update test_keeper_and_access_storage/test.py::test_create_replicated
test_keeper_availability_zone/test.py::test_get_availability_zone test_keeper_memory_soft_limit/test.py::test_soft_limit_create test_keeper_persistent_log_multinode/test.py::test_restart_multinode test_keeper_reconfig_remove/test.py::test_reconfig_remove_followers_from_3 test_keeper_reconfig_remove_many/test.py::test_reconfig_remove_2_and_leader test_keeper_secure_client/test.py::test_connection test_library_bridge/test_exiled.py::test_bridge_dies_with_parent test_log_levels_update/test.py::test_log_levels_update test_merge_tree_load_parts/test.py::test_merge_tree_load_parts test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_filesystem_error 'test_merge_tree_s3/test.py::test_alter_table_columns[node]' 'test_merge_tree_s3/test.py::test_attach_detach_partition[node]' 'test_merge_tree_s3/test.py::test_cache_with_full_disk_space[node_with_limited_disk]' 'test_merge_tree_s3/test.py::test_freeze_system_unfreeze[node]' 'test_merge_tree_s3/test.py::test_freeze_unfreeze[node]' 'test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[False-node]' 'test_merge_tree_s3/test.py::test_insert_same_partition_and_merge[True-node]' 'test_merge_tree_s3/test.py::test_lazy_seek_optimization_for_async_read[node]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_drop[node]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors[node-broken_s3_always_multi_part]' 'test_merge_tree_s3/test.py::test_merge_canceled_by_s3_errors_when_move[node]' 'test_merge_tree_s3/test.py::test_move_partition_to_another_disk[node]' 'test_merge_tree_s3/test.py::test_move_replace_partition_to_another_table[node]' 'test_merge_tree_s3/test.py::test_s3_disk_apply_new_settings[node]' 'test_merge_tree_s3/test.py::test_s3_disk_heavy_write_check_mem[node]' 'test_merge_tree_s3/test.py::test_s3_disk_reads_on_unstable_connection[node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory0-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory1-node]' 'test_merge_tree_s3/test.py::test_s3_engine_heavy_write_check_mem[in_flight_memory2-node]' 'test_merge_tree_s3/test.py::test_s3_no_delete_objects[node]' 'test_merge_tree_s3/test.py::test_simple_insert_select[0-16-node]' 'test_merge_tree_s3/test.py::test_simple_insert_select[8192-12-node]' 'test_merge_tree_s3/test.py::test_table_manipulations[node]' 'test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[0-4-2]' 'test_merge_tree_s3_with_cache/test.py::test_read_after_cache_is_wiped[8192-2-1]' 'test_merge_tree_s3_with_cache/test.py::test_write_is_cached[0-2]' 'test_merge_tree_s3_with_cache/test.py::test_write_is_cached[8192-1]' test_mutations_in_partitions_of_merge_tree/test.py::test_mutation_max_streams test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_with_where test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_merge_tree_without_where test_mutations_in_partitions_of_merge_tree/test.py::test_trivial_alter_in_partition_replicated_merge_tree 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-key-1]' 
'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[default-sipHash64(key)-1]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-key-1]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-0]' 'test_parallel_replicas_custom_key_load_balancing/test.py::test_parallel_replicas_custom_key_load_balancing[range-sipHash64(key)-1]' test_partition/test.py::test_attach_check_all_parts test_partition/test.py::test_cannot_attach_active_part test_partition/test.py::test_detached_part_dir_exists test_partition/test.py::test_drop_detached_parts test_partition/test.py::test_make_clone_in_detached test_partition/test.py::test_partition_complex test_partition/test.py::test_partition_simple test_partition/test.py::test_system_detached_parts test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_single_node test_peak_memory_usage/test.py::test_clickhouse_client_max_peak_memory_usage_distributed test_placement_info/test.py::test_placement_info_from_config test_placement_info/test.py::test_placement_info_from_file test_placement_info/test.py::test_placement_info_from_imds test_placement_info/test.py::test_placement_info_missing_data test_postgresql_protocol/test.py::test_java_client test_postgresql_protocol/test.py::test_psql_client test_postgresql_protocol/test.py::test_python_client test_postgresql_replica_database_engine_1/test.py::test_abrupt_connection_loss_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_abrupt_server_restart_while_heavy_replication test_postgresql_replica_database_engine_1/test.py::test_changing_replica_identity_value test_postgresql_replica_database_engine_1/test.py::test_clickhouse_restart test_postgresql_replica_database_engine_1/test.py::test_concurrent_transactions test_postgresql_replica_database_engine_1/test.py::test_different_data_types test_postgresql_replica_database_engine_1/test.py::test_drop_database_while_replication_startup_not_finished test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_all_database_tables test_postgresql_replica_database_engine_1/test.py::test_load_and_sync_subset_of_database_tables test_postgresql_replica_database_engine_1/test.py::test_many_concurrent_queries test_postgresql_replica_database_engine_1/test.py::test_multiple_databases test_postgresql_replica_database_engine_1/test.py::test_quoting_1 test_postgresql_replica_database_engine_1/test.py::test_quoting_2 test_postgresql_replica_database_engine_1/test.py::test_replica_identity_index -vvv" altinityinfra/integration-tests-runner:9d492c2eec24 ' returned non-zero exit status 1.
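The closing traceback is the expected runner behavior rather than a separate failure: pytest exits non-zero because of the 13 failures, docker run propagates that exit status, and subprocess.check_call turns any non-zero status into a CalledProcessError that echoes the whole command. A minimal sketch of that contract:

# check_call raises CalledProcessError whenever the child exits non-zero;
# the exception carries the command line and the exit status.
import subprocess

try:
    subprocess.check_call("exit 1", shell=True)
except subprocess.CalledProcessError as e:
    print(e.returncode)  # 1
    print(e.cmd)         # exit 1
    # str(e): "Command 'exit 1' returned non-zero exit status 1."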